Merge lp:~james-page/charms/trusty/odl-controller/productionize into lp:~openstack-charmers-archive/charms/trusty/odl-controller/next

Proposed by James Page
Status: Needs review
Proposed branch: lp:~james-page/charms/trusty/odl-controller/productionize
Merge into: lp:~openstack-charmers-archive/charms/trusty/odl-controller/next
Diff against target: 3084 lines (+1448/-346)
37 files modified
.project (+17/-0)
.pydevproject (+10/-0)
config.yaml (+4/-0)
files/odl-controller.service (+12/-0)
hooks/charmhelpers/contrib/network/ip.py (+36/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+10/-4)
hooks/charmhelpers/contrib/openstack/context.py (+48/-10)
hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh (+7/-5)
hooks/charmhelpers/contrib/openstack/neutron.py (+18/-8)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+19/-11)
hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken (+11/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+218/-69)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+441/-59)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/hookenv.py (+41/-7)
hooks/charmhelpers/core/host.py (+103/-43)
hooks/charmhelpers/core/services/helpers.py (+11/-5)
hooks/charmhelpers/core/templating.py (+13/-7)
hooks/charmhelpers/fetch/__init__.py (+9/-1)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
hooks/odl_controller_hooks.py (+26/-11)
hooks/odl_controller_utils.py (+57/-4)
hooks/odl_helper.py (+96/-0)
tests/015-basic-trusty-icehouse (+1/-1)
tests/016-basic-trusty-juno (+1/-1)
tests/017-basic-trusty-kilo (+1/-1)
tests/018-basic-trusty-liberty (+1/-1)
tests/019-basic-trusty-mitaka (+1/-1)
tests/020-basic-wily-liberty (+1/-1)
tests/021-basic-xenial-mitaka (+1/-1)
tests/basic_deployment.py (+2/-2)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+9/-4)
unit_tests/test_odl_controller_hooks.py (+40/-2)
unit_tests/test_odl_controller_utils.py (+95/-1)
To merge this branch: bzr merge lp:~james-page/charms/trusty/odl-controller/productionize
Reviewer: OpenStack Charmers
Review status: Pending
Review via email: mp+286718@code.launchpad.net
46. By James Page

Resync helpers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #838 odl-controller-next for james-page mp286718
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/838/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #939 odl-controller-next for james-page mp286718
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/15136684/
Build: http://10.245.162.36:8080/job/charm_lint_check/939/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #384 odl-controller-next for james-page mp286718
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15137306/
Build: http://10.245.162.36:8080/job/charm_amulet_test/384/

47. By James Page

Tidy pep8

48. By James Page

Tidy unit tests

49. By James Page

Update profiles to include aaa-api for password management

50. By James Page

Make venv compat

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #1166 odl-controller-next for james-page mp286718
    LINT OK: passed

Build: http://10.245.162.36:8080/job/charm_lint_check/1166/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #1001 odl-controller-next for james-page mp286718
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/15170675/
Build: http://10.245.162.36:8080/job/charm_unit_test/1001/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #445 odl-controller-next for james-page mp286718
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15171058/
Build: http://10.245.162.36:8080/job/charm_amulet_test/445/

Unmerged revisions

50. By James Page

Make venv compat

49. By James Page

Update profiles to include aaa-api for password management

48. By James Page

Tidy unit tests

47. By James Page

Tidy pep8

46. By James Page

Resync helpers

45. By James Page

Merge opnfv stuff

44. By James Page

Rebase

43. By James Page

Merge amulet test fixes for neutron

42. By James Page

Resync helpers

41. By James Page

Rebase

Preview Diff

1=== added file '.project'
2--- .project 1970-01-01 00:00:00 +0000
3+++ .project 2016-02-22 12:50:59 +0000
4@@ -0,0 +1,17 @@
5+<?xml version="1.0" encoding="UTF-8"?>
6+<projectDescription>
7+ <name>odl-controller</name>
8+ <comment></comment>
9+ <projects>
10+ </projects>
11+ <buildSpec>
12+ <buildCommand>
13+ <name>org.python.pydev.PyDevBuilder</name>
14+ <arguments>
15+ </arguments>
16+ </buildCommand>
17+ </buildSpec>
18+ <natures>
19+ <nature>org.python.pydev.pythonNature</nature>
20+ </natures>
21+</projectDescription>
22
23=== added file '.pydevproject'
24--- .pydevproject 1970-01-01 00:00:00 +0000
25+++ .pydevproject 2016-02-22 12:50:59 +0000
26@@ -0,0 +1,10 @@
27+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
28+<?eclipse-pydev version="1.0"?><pydev_project>
29+<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property>
30+<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
31+<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
32+<path>/odl-controller/unit_tests</path>
33+<path>/odl-controller/hooks</path>
34+<path>/odl-controller/tests</path>
35+</pydev_pathproperty>
36+</pydev_project>
37
38=== modified file 'config.yaml'
39--- config.yaml 2016-02-19 16:37:59 +0000
40+++ config.yaml 2016-02-22 12:50:59 +0000
41@@ -37,3 +37,7 @@
42 type: string
43 default: ''
44 description: Proxy to use for https connections for OpenDayLight
45+ admin-password:
46+ type: string
47+ default:
48+ description: Password for admin user; randomly generated if not provided.
49
50=== added file 'files/odl-controller.service'
51--- files/odl-controller.service 1970-01-01 00:00:00 +0000
52+++ files/odl-controller.service 2016-02-22 12:50:59 +0000
53@@ -0,0 +1,12 @@
54+[Unit]
55+Description=OpenDayLight SDN Controller
56+After=network.target
57+
58+[Service]
59+Type=forking
60+User=opendaylight
61+Group=opendaylight
62+ExecStart=/opt/opendaylight-karaf/bin/start
63+
64+[Install]
65+WantedBy=multi-user.target
66
67=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
68--- hooks/charmhelpers/contrib/network/ip.py 2015-11-11 14:57:38 +0000
69+++ hooks/charmhelpers/contrib/network/ip.py 2016-02-22 12:50:59 +0000
70@@ -53,7 +53,7 @@
71
72
73 def no_ip_found_error_out(network):
74- errmsg = ("No IP address found in network: %s" % network)
75+ errmsg = ("No IP address found in network(s): %s" % network)
76 raise ValueError(errmsg)
77
78
79@@ -61,7 +61,7 @@
80 """Get an IPv4 or IPv6 address within the network from the host.
81
82 :param network (str): CIDR presentation format. For example,
83- '192.168.1.0/24'.
84+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
85 :param fallback (str): If no address is found, return fallback.
86 :param fatal (boolean): If no address is found, fallback is not
87 set and fatal is True then exit(1).
88@@ -75,24 +75,26 @@
89 else:
90 return None
91
92- _validate_cidr(network)
93- network = netaddr.IPNetwork(network)
94- for iface in netifaces.interfaces():
95- addresses = netifaces.ifaddresses(iface)
96- if network.version == 4 and netifaces.AF_INET in addresses:
97- addr = addresses[netifaces.AF_INET][0]['addr']
98- netmask = addresses[netifaces.AF_INET][0]['netmask']
99- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
100- if cidr in network:
101- return str(cidr.ip)
102+ networks = network.split() or [network]
103+ for network in networks:
104+ _validate_cidr(network)
105+ network = netaddr.IPNetwork(network)
106+ for iface in netifaces.interfaces():
107+ addresses = netifaces.ifaddresses(iface)
108+ if network.version == 4 and netifaces.AF_INET in addresses:
109+ addr = addresses[netifaces.AF_INET][0]['addr']
110+ netmask = addresses[netifaces.AF_INET][0]['netmask']
111+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
112+ if cidr in network:
113+ return str(cidr.ip)
114
115- if network.version == 6 and netifaces.AF_INET6 in addresses:
116- for addr in addresses[netifaces.AF_INET6]:
117- if not addr['addr'].startswith('fe80'):
118- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
119- addr['netmask']))
120- if cidr in network:
121- return str(cidr.ip)
122+ if network.version == 6 and netifaces.AF_INET6 in addresses:
123+ for addr in addresses[netifaces.AF_INET6]:
124+ if not addr['addr'].startswith('fe80'):
125+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
126+ addr['netmask']))
127+ if cidr in network:
128+ return str(cidr.ip)
129
130 if fallback is not None:
131 return fallback
132@@ -454,3 +456,18 @@
133 return result
134 else:
135 return result.split('.')[0]
136+
137+
138+def port_has_listener(address, port):
139+ """
140+ Returns True if the address:port is open and being listened to,
141+ else False.
142+
143+ @param address: an IP address or hostname
144+ @param port: integer port
145+
146+ Note calls 'nc' via a subprocess shell
147+ """
148+ cmd = ['nc', '-z', address, str(port)]
149+ result = subprocess.call(cmd)
150+ return not(bool(result))
151
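The new `port_has_listener` helper shells out to `nc -z` to probe a TCP endpoint. The same check can be sketched in pure Python with the standard `socket` module (a stand-alone sketch, not the charmhelpers implementation):

```python
import socket

def port_has_listener(address, port, timeout=2.0):
    """Return True if address:port accepts TCP connections.

    Pure-Python equivalent of the nc-based helper in the diff above:
    connect_ex() returns 0 on a successful handshake, an errno otherwise.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((address, int(port))) == 0
```

This avoids a subprocess dependency, at the cost of being IPv4-only as written.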
152=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
153--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 14:57:38 +0000
154+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-02-22 12:50:59 +0000
155@@ -121,10 +121,12 @@
156
157 # Charms which should use the source config option
158 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
159- 'ceph-osd', 'ceph-radosgw']
160+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
161
162 # Charms which can not use openstack-origin, ie. many subordinates
163- no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
164+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
165+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
166+ 'cinder-backup']
167
168 if self.openstack:
169 for svc in services:
170@@ -224,7 +226,8 @@
171 self.precise_havana, self.precise_icehouse,
172 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
173 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
174- self.wily_liberty) = range(12)
175+ self.wily_liberty, self.trusty_mitaka,
176+ self.xenial_mitaka) = range(14)
177
178 releases = {
179 ('precise', None): self.precise_essex,
180@@ -236,9 +239,11 @@
181 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
182 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
183 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
184+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
185 ('utopic', None): self.utopic_juno,
186 ('vivid', None): self.vivid_kilo,
187- ('wily', None): self.wily_liberty}
188+ ('wily', None): self.wily_liberty,
189+ ('xenial', None): self.xenial_mitaka}
190 return releases[(self.series, self.openstack)]
191
192 def _get_openstack_release_string(self):
193@@ -255,6 +260,7 @@
194 ('utopic', 'juno'),
195 ('vivid', 'kilo'),
196 ('wily', 'liberty'),
197+ ('xenial', 'mitaka'),
198 ])
199 if self.openstack:
200 os_origin = self.openstack.split(':')[1]
201
202=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
203--- hooks/charmhelpers/contrib/openstack/context.py 2015-11-11 14:57:38 +0000
204+++ hooks/charmhelpers/contrib/openstack/context.py 2016-02-22 12:50:59 +0000
205@@ -57,6 +57,7 @@
206 get_nic_hwaddr,
207 mkdir,
208 write_file,
209+ pwgen,
210 )
211 from charmhelpers.contrib.hahelpers.cluster import (
212 determine_apache_port,
213@@ -87,6 +88,14 @@
214 is_bridge_member,
215 )
216 from charmhelpers.contrib.openstack.utils import get_host_ip
217+from charmhelpers.core.unitdata import kv
218+
219+try:
220+ import psutil
221+except ImportError:
222+ apt_install('python-psutil', fatal=True)
223+ import psutil
224+
225 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
226 ADDRESS_TYPES = ['admin', 'internal', 'public']
227
228@@ -401,6 +410,7 @@
229 auth_host = format_ipv6_addr(auth_host) or auth_host
230 svc_protocol = rdata.get('service_protocol') or 'http'
231 auth_protocol = rdata.get('auth_protocol') or 'http'
232+ api_version = rdata.get('api_version') or '2.0'
233 ctxt.update({'service_port': rdata.get('service_port'),
234 'service_host': serv_host,
235 'auth_host': auth_host,
236@@ -409,7 +419,8 @@
237 'admin_user': rdata.get('service_username'),
238 'admin_password': rdata.get('service_password'),
239 'service_protocol': svc_protocol,
240- 'auth_protocol': auth_protocol})
241+ 'auth_protocol': auth_protocol,
242+ 'api_version': api_version})
243
244 if self.context_complete(ctxt):
245 # NOTE(jamespage) this is required for >= icehouse
246@@ -626,15 +637,28 @@
247 if config('haproxy-client-timeout'):
248 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
249
250+ if config('haproxy-queue-timeout'):
251+ ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
252+
253+ if config('haproxy-connect-timeout'):
254+ ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
255+
256 if config('prefer-ipv6'):
257 ctxt['ipv6'] = True
258 ctxt['local_host'] = 'ip6-localhost'
259 ctxt['haproxy_host'] = '::'
260- ctxt['stat_port'] = ':::8888'
261 else:
262 ctxt['local_host'] = '127.0.0.1'
263 ctxt['haproxy_host'] = '0.0.0.0'
264- ctxt['stat_port'] = ':8888'
265+
266+ ctxt['stat_port'] = '8888'
267+
268+ db = kv()
269+ ctxt['stat_password'] = db.get('stat-password')
270+ if not ctxt['stat_password']:
271+ ctxt['stat_password'] = db.set('stat-password',
272+ pwgen(32))
273+ db.flush()
274
275 for frontend in cluster_hosts:
276 if (len(cluster_hosts[frontend]['backends']) > 1 or
277@@ -1088,6 +1112,20 @@
278 config_flags_parser(config_flags)}
279
280
281+class LibvirtConfigFlagsContext(OSContextGenerator):
282+ """
283+ This context provides support for extending
284+ the libvirt section through user-defined flags.
285+ """
286+ def __call__(self):
287+ ctxt = {}
288+ libvirt_flags = config('libvirt-flags')
289+ if libvirt_flags:
290+ ctxt['libvirt_flags'] = config_flags_parser(
291+ libvirt_flags)
292+ return ctxt
293+
294+
295 class SubordinateConfigContext(OSContextGenerator):
296
297 """
298@@ -1228,13 +1266,11 @@
299
300 @property
301 def num_cpus(self):
302- try:
303- from psutil import NUM_CPUS
304- except ImportError:
305- apt_install('python-psutil', fatal=True)
306- from psutil import NUM_CPUS
307-
308- return NUM_CPUS
309+ # NOTE: use cpu_count if present (16.04 support)
310+ if hasattr(psutil, 'cpu_count'):
311+ return psutil.cpu_count()
312+ else:
313+ return psutil.NUM_CPUS
314
315 def __call__(self):
316 multiplier = config('worker-multiplier') or 0
317@@ -1437,6 +1473,8 @@
318 rdata.get('service_protocol') or 'http',
319 'auth_protocol':
320 rdata.get('auth_protocol') or 'http',
321+ 'api_version':
322+ rdata.get('api_version') or '2.0',
323 }
324 if self.context_complete(ctxt):
325 return ctxt
326
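The HAProxyContext change stops hard-coding the stats password and instead generates one once, persisting it in the unit's kv store so every hook invocation renders the same credential. The generate-once pattern can be sketched with a plain dict standing in for charmhelpers' `unitdata.kv()` (names here are illustrative, not charm code):

```python
import secrets
import string

def pwgen(length=32):
    # Simplified stand-in for charmhelpers.core.host.pwgen
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

def get_or_create_password(db, key='stat-password'):
    """Generate-once pattern: reuse the stored value on later calls."""
    password = db.get(key)
    if not password:
        password = pwgen(32)
        db[key] = password  # the real kv() store also needs db.flush()
    return password
```

Without persistence, each hook run would regenerate the password and needlessly rewrite haproxy.cfg.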
327=== modified file 'hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh'
328--- hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2015-07-22 12:10:31 +0000
329+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2016-02-22 12:50:59 +0000
330@@ -9,15 +9,17 @@
331 CRITICAL=0
332 NOTACTIVE=''
333 LOGFILE=/var/log/nagios/check_haproxy.log
334-AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
335+AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR=1{print $4}')
336
337-for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
338+typeset -i N_INSTANCES=0
339+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
340 do
341- output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
342+ N_INSTANCES=N_INSTANCES+1
343+ output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
344 if [ $? != 0 ]; then
345 date >> $LOGFILE
346 echo $output >> $LOGFILE
347- /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&1
348+ /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
349 CRITICAL=1
350 NOTACTIVE="${NOTACTIVE} $appserver"
351 fi
352@@ -28,5 +30,5 @@
353 exit 2
354 fi
355
356-echo "OK: All haproxy instances looking good"
357+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
358 exit 0
359
360=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
361--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-11-11 14:57:38 +0000
362+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-02-22 12:50:59 +0000
363@@ -50,7 +50,7 @@
364 if kernel_version() >= (3, 13):
365 return []
366 else:
367- return ['openvswitch-datapath-dkms']
368+ return [headers_package(), 'openvswitch-datapath-dkms']
369
370
371 # legacy
372@@ -70,7 +70,7 @@
373 relation_prefix='neutron',
374 ssl_dir=QUANTUM_CONF_DIR)],
375 'services': ['quantum-plugin-openvswitch-agent'],
376- 'packages': [[headers_package()] + determine_dkms_package(),
377+ 'packages': [determine_dkms_package(),
378 ['quantum-plugin-openvswitch-agent']],
379 'server_packages': ['quantum-server',
380 'quantum-plugin-openvswitch'],
381@@ -111,7 +111,7 @@
382 relation_prefix='neutron',
383 ssl_dir=NEUTRON_CONF_DIR)],
384 'services': ['neutron-plugin-openvswitch-agent'],
385- 'packages': [[headers_package()] + determine_dkms_package(),
386+ 'packages': [determine_dkms_package(),
387 ['neutron-plugin-openvswitch-agent']],
388 'server_packages': ['neutron-server',
389 'neutron-plugin-openvswitch'],
390@@ -155,7 +155,7 @@
391 relation_prefix='neutron',
392 ssl_dir=NEUTRON_CONF_DIR)],
393 'services': [],
394- 'packages': [[headers_package()] + determine_dkms_package(),
395+ 'packages': [determine_dkms_package(),
396 ['neutron-plugin-cisco']],
397 'server_packages': ['neutron-server',
398 'neutron-plugin-cisco'],
399@@ -174,7 +174,7 @@
400 'neutron-dhcp-agent',
401 'nova-api-metadata',
402 'etcd'],
403- 'packages': [[headers_package()] + determine_dkms_package(),
404+ 'packages': [determine_dkms_package(),
405 ['calico-compute',
406 'bird',
407 'neutron-dhcp-agent',
408@@ -204,8 +204,8 @@
409 database=config('database'),
410 ssl_dir=NEUTRON_CONF_DIR)],
411 'services': [],
412- 'packages': [['plumgrid-lxc'],
413- ['iovisor-dkms']],
414+ 'packages': ['plumgrid-lxc',
415+ 'iovisor-dkms'],
416 'server_packages': ['neutron-server',
417 'neutron-plugin-plumgrid'],
418 'server_services': ['neutron-server']
419@@ -219,7 +219,7 @@
420 relation_prefix='neutron',
421 ssl_dir=NEUTRON_CONF_DIR)],
422 'services': [],
423- 'packages': [[headers_package()] + determine_dkms_package()],
424+ 'packages': [determine_dkms_package()],
425 'server_packages': ['neutron-server',
426 'python-neutron-plugin-midonet'],
427 'server_services': ['neutron-server']
428@@ -233,6 +233,16 @@
429 'neutron-plugin-ml2']
430 # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
431 plugins['nvp'] = plugins['nsx']
432+ if release >= 'kilo':
433+ plugins['midonet']['driver'] = (
434+ 'neutron.plugins.midonet.plugin.MidonetPluginV2')
435+ if release >= 'liberty':
436+ plugins['midonet']['driver'] = (
437+ 'midonet.neutron.plugin_v1.MidonetPluginV2')
438+ plugins['midonet']['server_packages'].remove(
439+ 'python-neutron-plugin-midonet')
440+ plugins['midonet']['server_packages'].append(
441+ 'python-networking-midonet')
442 return plugins
443
444
445
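The neutron.py hunks all drop `[headers_package()] + ...` at the call sites because `determine_dkms_package` now returns the full package list itself, including the kernel headers when a DKMS build is actually needed. A self-contained sketch of the consolidated logic (the `kernel_version` stub is hypothetical; the real helper inspects `uname -r`):

```python
def kernel_version():
    # Hypothetical stub; the real charmhelper parses the running kernel
    return (3, 13)

def headers_package():
    return 'linux-headers-generic'

def determine_dkms_package():
    """Kernels >= 3.13 ship openvswitch in-tree, so neither the DKMS
    module nor the headers are required; older kernels need both."""
    if kernel_version() >= (3, 13):
        return []
    return [headers_package(), 'openvswitch-datapath-dkms']
```

Centralising the headers decision keeps the plugin package lists from installing headers on kernels that do not need a module build.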
446=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
447--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-02-19 22:08:13 +0000
448+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-02-22 12:50:59 +0000
449@@ -12,27 +12,35 @@
450 option tcplog
451 option dontlognull
452 retries 3
453- timeout queue 1000
454- timeout connect 1000
455-{% if haproxy_client_timeout -%}
456+{%- if haproxy_queue_timeout %}
457+ timeout queue {{ haproxy_queue_timeout }}
458+{%- else %}
459+ timeout queue 5000
460+{%- endif %}
461+{%- if haproxy_connect_timeout %}
462+ timeout connect {{ haproxy_connect_timeout }}
463+{%- else %}
464+ timeout connect 5000
465+{%- endif %}
466+{%- if haproxy_client_timeout %}
467 timeout client {{ haproxy_client_timeout }}
468-{% else -%}
469+{%- else %}
470 timeout client 30000
471-{% endif -%}
472-
473-{% if haproxy_server_timeout -%}
474+{%- endif %}
475+{%- if haproxy_server_timeout %}
476 timeout server {{ haproxy_server_timeout }}
477-{% else -%}
478+{%- else %}
479 timeout server 30000
480-{% endif -%}
481+{%- endif %}
482
483-listen stats {{ stat_port }}
484+listen stats
485+ bind {{ local_host }}:{{ stat_port }}
486 mode http
487 stats enable
488 stats hide-version
489 stats realm Haproxy\ Statistics
490 stats uri /
491- stats auth admin:password
492+ stats auth admin:{{ stat_password }}
493
494 {% if frontends -%}
495 {% for service, ports in service_ports.items() -%}
496
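The reworked haproxy.cfg template raises the baked-in queue/connect timeouts from 1000 ms to 5000 ms and lets the new charm config options override each value. The fallback behaviour the Jinja2 conditionals implement can be mirrored in plain Python (a sketch, not charm code):

```python
def haproxy_timeouts(config):
    """Explicit charm config wins; otherwise use the defaults baked
    into the haproxy.cfg template (all values in milliseconds)."""
    defaults = {
        'queue': 5000,
        'connect': 5000,
        'client': 30000,
        'server': 30000,
    }
    return {
        name: config.get('haproxy-%s-timeout' % name) or default
        for name, default in defaults.items()
    }
```

`or default` is used rather than `dict.get`'s default so that an unset option stored as None still falls back, matching the template's truthiness test.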
497=== modified file 'hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken'
498--- hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken 2015-07-22 12:10:31 +0000
499+++ hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken 2016-02-22 12:50:59 +0000
500@@ -1,4 +1,14 @@
501 {% if auth_host -%}
502+{% if api_version == '3' -%}
503+[keystone_authtoken]
504+auth_url = {{ service_protocol }}://{{ service_host }}:{{ service_port }}
505+project_name = {{ admin_tenant_name }}
506+username = {{ admin_user }}
507+password = {{ admin_password }}
508+project_domain_name = default
509+user_domain_name = default
510+auth_plugin = password
511+{% else -%}
512 [keystone_authtoken]
513 identity_uri = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/{{ auth_admin_prefix }}
514 auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/{{ service_admin_prefix }}
515@@ -7,3 +17,4 @@
516 admin_password = {{ admin_password }}
517 signing_dir = {{ signing_dir }}
518 {% endif -%}
519+{% endif -%}
520
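When the identity relation advertises `api_version` '3', the template's new branch renders a Keystone v3 auth section along these lines (all values illustrative):

```ini
[keystone_authtoken]
auth_url = http://10.0.0.10:5000
project_name = services
username = neutron
password = s3cr3t
project_domain_name = default
user_domain_name = default
auth_plugin = password
```

Any other (or missing) `api_version` falls through to the original v2.0 `identity_uri`/`admin_*` block.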
521=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
522--- hooks/charmhelpers/contrib/openstack/utils.py 2015-11-11 14:57:38 +0000
523+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-02-22 12:50:59 +0000
524@@ -23,8 +23,10 @@
525 import os
526 import sys
527 import re
528+import itertools
529
530 import six
531+import tempfile
532 import traceback
533 import uuid
534 import yaml
535@@ -41,6 +43,7 @@
536 config,
537 log as juju_log,
538 charm_dir,
539+ DEBUG,
540 INFO,
541 related_units,
542 relation_ids,
543@@ -58,6 +61,7 @@
544 from charmhelpers.contrib.network.ip import (
545 get_ipv6_addr,
546 is_ipv6,
547+ port_has_listener,
548 )
549
550 from charmhelpers.contrib.python.packages import (
551@@ -65,7 +69,7 @@
552 pip_install,
553 )
554
555-from charmhelpers.core.host import lsb_release, mounts, umount
556+from charmhelpers.core.host import lsb_release, mounts, umount, service_running
557 from charmhelpers.fetch import apt_install, apt_cache, install_remote
558 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
559 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
560@@ -86,6 +90,7 @@
561 ('utopic', 'juno'),
562 ('vivid', 'kilo'),
563 ('wily', 'liberty'),
564+ ('xenial', 'mitaka'),
565 ])
566
567
568@@ -99,61 +104,70 @@
569 ('2014.2', 'juno'),
570 ('2015.1', 'kilo'),
571 ('2015.2', 'liberty'),
572+ ('2016.1', 'mitaka'),
573 ])
574
575-# The ugly duckling
576+# The ugly duckling - must list releases oldest to newest
577 SWIFT_CODENAMES = OrderedDict([
578- ('1.4.3', 'diablo'),
579- ('1.4.8', 'essex'),
580- ('1.7.4', 'folsom'),
581- ('1.8.0', 'grizzly'),
582- ('1.7.7', 'grizzly'),
583- ('1.7.6', 'grizzly'),
584- ('1.10.0', 'havana'),
585- ('1.9.1', 'havana'),
586- ('1.9.0', 'havana'),
587- ('1.13.1', 'icehouse'),
588- ('1.13.0', 'icehouse'),
589- ('1.12.0', 'icehouse'),
590- ('1.11.0', 'icehouse'),
591- ('2.0.0', 'juno'),
592- ('2.1.0', 'juno'),
593- ('2.2.0', 'juno'),
594- ('2.2.1', 'kilo'),
595- ('2.2.2', 'kilo'),
596- ('2.3.0', 'liberty'),
597- ('2.4.0', 'liberty'),
598- ('2.5.0', 'liberty'),
599+ ('diablo',
600+ ['1.4.3']),
601+ ('essex',
602+ ['1.4.8']),
603+ ('folsom',
604+ ['1.7.4']),
605+ ('grizzly',
606+ ['1.7.6', '1.7.7', '1.8.0']),
607+ ('havana',
608+ ['1.9.0', '1.9.1', '1.10.0']),
609+ ('icehouse',
610+ ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
611+ ('juno',
612+ ['2.0.0', '2.1.0', '2.2.0']),
613+ ('kilo',
614+ ['2.2.1', '2.2.2']),
615+ ('liberty',
616+ ['2.3.0', '2.4.0', '2.5.0']),
617+ ('mitaka',
618+ ['2.5.0']),
619 ])
620
621 # >= Liberty version->codename mapping
622 PACKAGE_CODENAMES = {
623 'nova-common': OrderedDict([
624- ('12.0.0', 'liberty'),
625+ ('12.0', 'liberty'),
626+ ('13.0', 'mitaka'),
627 ]),
628 'neutron-common': OrderedDict([
629- ('7.0.0', 'liberty'),
630+ ('7.0', 'liberty'),
631+ ('8.0', 'mitaka'),
632 ]),
633 'cinder-common': OrderedDict([
634- ('7.0.0', 'liberty'),
635+ ('7.0', 'liberty'),
636+ ('8.0', 'mitaka'),
637 ]),
638 'keystone': OrderedDict([
639- ('8.0.0', 'liberty'),
640+ ('8.0', 'liberty'),
641+ ('9.0', 'mitaka'),
642 ]),
643 'horizon-common': OrderedDict([
644- ('8.0.0', 'liberty'),
645+ ('8.0', 'liberty'),
646+ ('9.0', 'mitaka'),
647 ]),
648 'ceilometer-common': OrderedDict([
649- ('5.0.0', 'liberty'),
650+ ('5.0', 'liberty'),
651+ ('6.0', 'mitaka'),
652 ]),
653 'heat-common': OrderedDict([
654- ('5.0.0', 'liberty'),
655+ ('5.0', 'liberty'),
656+ ('6.0', 'mitaka'),
657 ]),
658 'glance-common': OrderedDict([
659- ('11.0.0', 'liberty'),
660+ ('11.0', 'liberty'),
661+ ('12.0', 'mitaka'),
662 ]),
663 'openstack-dashboard': OrderedDict([
664- ('8.0.0', 'liberty'),
665+ ('8.0', 'liberty'),
666+ ('9.0', 'mitaka'),
667 ]),
668 }
669
670@@ -216,6 +230,33 @@
671 error_out(e)
672
673
674+def get_os_version_codename_swift(codename):
675+ '''Determine OpenStack version number of swift from codename.'''
676+ for k, v in six.iteritems(SWIFT_CODENAMES):
677+ if k == codename:
678+ return v[-1]
679+ e = 'Could not derive swift version for '\
680+ 'codename: %s' % codename
681+ error_out(e)
682+
683+
684+def get_swift_codename(version):
685+ '''Determine OpenStack codename that corresponds to swift version.'''
686+ codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
687+ if len(codenames) > 1:
688+ # If more than one release codename contains this version we determine
689+ # the actual codename based on the highest available install source.
690+ for codename in reversed(codenames):
691+ releases = UBUNTU_OPENSTACK_RELEASE
692+ release = [k for k, v in six.iteritems(releases) if codename in v]
693+ ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
694+ if codename in ret or release[0] in ret:
695+ return codename
696+ elif len(codenames) == 1:
697+ return codenames[0]
698+ return None
699+
700+
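The SWIFT_CODENAMES remap (codename to list of point releases) exists because swift 2.5.0 shipped in both liberty and mitaka, so a version no longer maps to a unique codename. A trimmed sketch of the ambiguous lookup that `get_swift_codename` then resolves via `apt-cache policy`:

```python
from collections import OrderedDict

# Trimmed copy of the remapped structure from the diff above.
SWIFT_CODENAMES = OrderedDict([
    ('kilo', ['2.2.1', '2.2.2']),
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0']),
])

def codenames_for_version(version):
    """All codenames that shipped a given swift version; more than one
    result means the install source must break the tie."""
    return [k for k, v in SWIFT_CODENAMES.items() if version in v]
```

Only when this returns multiple candidates does the helper fall back to inspecting the configured archive.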
701 def get_os_codename_package(package, fatal=True):
702 '''Derive OpenStack release codename from an installed package.'''
703 import apt_pkg as apt
704@@ -240,7 +281,14 @@
705 error_out(e)
706
707 vers = apt.upstream_version(pkg.current_ver.ver_str)
708- match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
709+ if 'swift' in pkg.name:
710+ # Fully x.y.z match for swift versions
711+ match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
712+ else:
713+ # x.y match only for 20XX.X
714+ # and ignore patch level for other packages
715+ match = re.match('^(\d+)\.(\d+)', vers)
716+
717 if match:
718 vers = match.group(0)
719
720@@ -252,13 +300,8 @@
721 # < Liberty co-ordinated project versions
722 try:
723 if 'swift' in pkg.name:
724- swift_vers = vers[:5]
725- if swift_vers not in SWIFT_CODENAMES:
726- # Deal with 1.10.0 upward
727- swift_vers = vers[:6]
728- return SWIFT_CODENAMES[swift_vers]
729+ return get_swift_codename(vers)
730 else:
731- vers = vers[:6]
732 return OPENSTACK_CODENAMES[vers]
733 except KeyError:
734 if not fatal:
735@@ -276,12 +319,14 @@
736
737 if 'swift' in pkg:
738 vers_map = SWIFT_CODENAMES
739+ for cname, version in six.iteritems(vers_map):
740+ if cname == codename:
741+ return version[-1]
742 else:
743 vers_map = OPENSTACK_CODENAMES
744-
745- for version, cname in six.iteritems(vers_map):
746- if cname == codename:
747- return version
748+ for version, cname in six.iteritems(vers_map):
749+ if cname == codename:
750+ return version
751 # e = "Could not determine OpenStack version for package: %s" % pkg
752 # error_out(e)
753
754@@ -306,12 +351,42 @@
755
756
757 def import_key(keyid):
758- cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
759- "--recv-keys %s" % keyid
760- try:
761- subprocess.check_call(cmd.split(' '))
762- except subprocess.CalledProcessError:
763- error_out("Error importing repo key %s" % keyid)
764+ key = keyid.strip()
765+ if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
766+ key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
767+ juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
768+ juju_log("Importing ASCII Armor PGP key", level=DEBUG)
769+ with tempfile.NamedTemporaryFile() as keyfile:
770+ with open(keyfile.name, 'w') as fd:
771+ fd.write(key)
772+ fd.write("\n")
773+
774+ cmd = ['apt-key', 'add', keyfile.name]
775+ try:
776+ subprocess.check_call(cmd)
777+ except subprocess.CalledProcessError:
778+ error_out("Error importing PGP key '%s'" % key)
779+ else:
780+ juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
781+ juju_log("Importing PGP key from keyserver", level=DEBUG)
782+ cmd = ['apt-key', 'adv', '--keyserver',
783+ 'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
784+ try:
785+ subprocess.check_call(cmd)
786+ except subprocess.CalledProcessError:
787+ error_out("Error importing PGP key '%s'" % key)
788+
789+
790+def get_source_and_pgp_key(input):
791+ """Look for a pgp key ID or ascii-armor key in the given input."""
792+ index = input.strip()
793+ index = input.rfind('|')
794+ if index < 0:
795+ return input, None
796+
797+ key = input[index + 1:].strip('|')
798+ source = input[:index]
799+ return source, key
800
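`get_source_and_pgp_key` factors out the `source|key` splitting that was previously inlined in `configure_installation_source`, so both the `ppa:` and `deb` branches can share it. A cleaned-up sketch of the split (note the version in the diff assigns `input.strip()` to `index` and immediately overwrites it, which this sketch drops):

```python
def get_source_and_pgp_key(source):
    """Split 'deb http://... main|KEYID' into (source, key).

    Returns (source, None) when no '|' delimiter is present, so the
    caller can skip the key import entirely.
    """
    idx = source.rfind('|')
    if idx < 0:
        return source, None
    return source[:idx], source[idx + 1:].strip()
```

Using `rfind` means only the last `|` is treated as the delimiter, which matters if an ASCII-armored key were ever embedded after it.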
801
802 def configure_installation_source(rel):
803@@ -323,16 +398,16 @@
804 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
805 f.write(DISTRO_PROPOSED % ubuntu_rel)
806 elif rel[:4] == "ppa:":
807- src = rel
808+ src, key = get_source_and_pgp_key(rel)
809+ if key:
810+ import_key(key)
811+
812 subprocess.check_call(["add-apt-repository", "-y", src])
813 elif rel[:3] == "deb":
814- l = len(rel.split('|'))
815- if l == 2:
816- src, key = rel.split('|')
817- juju_log("Importing PPA key from keyserver for %s" % src)
818+ src, key = get_source_and_pgp_key(rel)
819+ if key:
820 import_key(key)
821- elif l == 1:
822- src = rel
823+
824 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
825 f.write(src)
826 elif rel[:6] == 'cloud:':
827@@ -377,6 +452,9 @@
828 'liberty': 'trusty-updates/liberty',
829 'liberty/updates': 'trusty-updates/liberty',
830 'liberty/proposed': 'trusty-proposed/liberty',
831+ 'mitaka': 'trusty-updates/mitaka',
832+ 'mitaka/updates': 'trusty-updates/mitaka',
833+ 'mitaka/proposed': 'trusty-proposed/mitaka',
834 }
835
836 try:
837@@ -444,11 +522,16 @@
838 cur_vers = get_os_version_package(package)
839 if "swift" in package:
840 codename = get_os_codename_install_source(src)
841- available_vers = get_os_version_codename(codename, SWIFT_CODENAMES)
842+ avail_vers = get_os_version_codename_swift(codename)
843 else:
844- available_vers = get_os_version_install_source(src)
845+ avail_vers = get_os_version_install_source(src)
846 apt.init()
847- return apt.version_compare(available_vers, cur_vers) == 1
848+ if "swift" in package:
849+ major_cur_vers = cur_vers.split('.', 1)[0]
850+ major_avail_vers = avail_vers.split('.', 1)[0]
851+ major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
852+ return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
853+ return apt.version_compare(avail_vers, cur_vers) == 1
854
855
856 def ensure_block_device(block_device):
857@@ -577,7 +660,7 @@
858 return yaml.load(projects_yaml)
859
860
861-def git_clone_and_install(projects_yaml, core_project, depth=1):
862+def git_clone_and_install(projects_yaml, core_project):
863 """
864 Clone/install all specified OpenStack repositories.
865
866@@ -627,6 +710,9 @@
867 for p in projects['repositories']:
868 repo = p['repository']
869 branch = p['branch']
870+ depth = '1'
871+ if 'depth' in p.keys():
872+ depth = p['depth']
873 if p['name'] == 'requirements':
874 repo_dir = _git_clone_and_install_single(repo, branch, depth,
875 parent_dir, http_proxy,
876@@ -671,19 +757,13 @@
877 """
878 Clone and install a single git repository.
879 """
880- dest_dir = os.path.join(parent_dir, os.path.basename(repo))
881-
882 if not os.path.exists(parent_dir):
 883         juju_log('Directory does not exist at {}. '
 884                  'Creating it.'.format(parent_dir))
885 os.mkdir(parent_dir)
886
887- if not os.path.exists(dest_dir):
888- juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
889- repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
890- depth=depth)
891- else:
892- repo_dir = dest_dir
893+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
894+ repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth)
895
896 venv = os.path.join(parent_dir, 'venv')
897
898@@ -782,13 +862,23 @@
899 return wrap
900
901
902-def set_os_workload_status(configs, required_interfaces, charm_func=None):
903+def set_os_workload_status(configs, required_interfaces, charm_func=None, services=None, ports=None):
904 """
905 Set workload status based on complete contexts.
906 status-set missing or incomplete contexts
907 and juju-log details of missing required data.
908 charm_func is a charm specific function to run checking
909 for charm specific requirements such as a VIP setting.
910+
911+ This function also checks for whether the services defined are ACTUALLY
912+ running and that the ports they advertise are open and being listened to.
913+
 914+    @param services - OPTIONAL: a [{'service': <string>, 'ports': [<int>]}]
 915+                      The ports are optional.
 916+                      If services is a [<string>] then ports are ignored.
 917+    @param ports - OPTIONAL: an [<int>] representing ports that should be
 918+                   open.
919+ @returns None
920 """
921 incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
922 state = 'active'
923@@ -867,6 +957,65 @@
924 else:
925 message = charm_message
926
927+ # If the charm thinks the unit is active, check that the actual services
928+ # really are active.
929+ if services is not None and state == 'active':
930+ # if we're passed the dict() then just grab the values as a list.
931+ if isinstance(services, dict):
932+ services = services.values()
933+ # either extract the list of services from the dictionary, or if
934+ # it is a simple string, use that. i.e. works with mixed lists.
935+ _s = []
936+ for s in services:
937+ if isinstance(s, dict) and 'service' in s:
938+ _s.append(s['service'])
939+ if isinstance(s, str):
940+ _s.append(s)
941+ services_running = [service_running(s) for s in _s]
942+ if not all(services_running):
943+ not_running = [s for s, running in zip(_s, services_running)
944+ if not running]
945+ message = ("Services not running that should be: {}"
946+ .format(", ".join(not_running)))
947+ state = 'blocked'
948+ # also verify that the ports that should be open are open
 949+        # NB: ServiceManager objects only OPTIONALLY have ports
950+ port_map = OrderedDict([(s['service'], s['ports'])
951+ for s in services if 'ports' in s])
952+ if state == 'active' and port_map:
953+ all_ports = list(itertools.chain(*port_map.values()))
954+ ports_open = [port_has_listener('0.0.0.0', p)
955+ for p in all_ports]
956+ if not all(ports_open):
957+ not_opened = [p for p, opened in zip(all_ports, ports_open)
958+ if not opened]
959+ map_not_open = OrderedDict()
960+ for service, ports in port_map.items():
961+ closed_ports = set(ports).intersection(not_opened)
962+ if closed_ports:
963+ map_not_open[service] = closed_ports
964+ # find which service has missing ports. They are in service
965+ # order which makes it a bit easier.
966+ message = (
967+ "Services with ports not open that should be: {}"
968+ .format(
969+ ", ".join([
970+ "{}: [{}]".format(
971+ service,
972+ ", ".join([str(v) for v in ports]))
973+ for service, ports in map_not_open.items()])))
974+ state = 'blocked'
975+
976+ if ports is not None and state == 'active':
977+ # and we can also check ports which we don't know the service for
978+ ports_open = [port_has_listener('0.0.0.0', p) for p in ports]
979+ if not all(ports_open):
980+ message = (
981+ "Ports which should be open, but are not: {}"
982+ .format(", ".join([str(p) for p, v in zip(ports, ports_open)
983+ if not v])))
984+ state = 'blocked'
985+
986 # Set to active if all requirements have been met
987 if state == 'active':
988 message = "Unit is ready"
989
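The service and port verification added above can be sketched in isolation. This is a simplified model, not the charm-helpers implementation: charm-helpers' `port_has_listener` shells out to `nc -z`, whereas here we attempt a plain TCP connect, and the name extraction mirrors the mixed list/dict shapes the docstring describes.

```python
import socket


def extract_service_names(services):
    # Accept {'name': {'service': ...}} dicts, [{'service': ...}] maps,
    # or plain ['name'] strings -- mirroring set_os_workload_status().
    if isinstance(services, dict):
        services = list(services.values())
    names = []
    for s in services:
        if isinstance(s, dict) and 'service' in s:
            names.append(s['service'])
        elif isinstance(s, str):
            names.append(s)
    return names


def tcp_port_open(address, port, timeout=1.0):
    # Simplified stand-in for port_has_listener(): try to connect.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        return sock.connect_ex((address, port)) == 0
    finally:
        sock.close()
```

A unit would be marked `blocked` whenever any extracted service is not running, or any of its declared ports fails the listener check.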
990=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
991--- hooks/charmhelpers/contrib/python/packages.py 2015-11-11 14:57:38 +0000
992+++ hooks/charmhelpers/contrib/python/packages.py 2016-02-22 12:50:59 +0000
993@@ -19,20 +19,35 @@
994
995 import os
996 import subprocess
997+import sys
998
999 from charmhelpers.fetch import apt_install, apt_update
1000 from charmhelpers.core.hookenv import charm_dir, log
1001
1002-try:
1003- from pip import main as pip_execute
1004-except ImportError:
1005- apt_update()
1006- apt_install('python-pip')
1007- from pip import main as pip_execute
1008-
1009 __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
1010
1011
1012+def pip_execute(*args, **kwargs):
1013+    """Overridden pip_execute() that prevents sys.path being changed.
1014+
1015+    The act of importing main from the pip module appears to add wheels
1016+    from /usr/share/python-wheels (installed by various tools) to
1017+    sys.path. This function ensures that sys.path remains the same after
1018+    the call is executed.
1019+ """
1020+ try:
1021+        _path = sys.path[:]
1022+ try:
1023+ from pip import main as _pip_execute
1024+ except ImportError:
1025+ apt_update()
1026+ apt_install('python-pip')
1027+ from pip import main as _pip_execute
1028+ _pip_execute(*args, **kwargs)
1029+ finally:
1030+ sys.path = _path
1031+
1032+
1033 def parse_options(given, available):
1034 """Given a set of options, check if available"""
1035 for key, value in sorted(given.items()):
1036@@ -42,8 +57,12 @@
1037 yield "--{0}={1}".format(key, value)
1038
1039
1040-def pip_install_requirements(requirements, **options):
1041- """Install a requirements file """
1042+def pip_install_requirements(requirements, constraints=None, **options):
1043+ """Install a requirements file.
1044+
1045+ :param constraints: Path to pip constraints file.
1046+ http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
1047+ """
1048 command = ["install"]
1049
1050 available_options = ('proxy', 'src', 'log', )
1051@@ -51,8 +70,13 @@
1052 command.append(option)
1053
1054 command.append("-r {0}".format(requirements))
1055- log("Installing from file: {} with options: {}".format(requirements,
1056- command))
1057+ if constraints:
1058+ command.append("-c {0}".format(constraints))
1059+ log("Installing from file: {} with constraints {} "
1060+ "and options: {}".format(requirements, constraints, command))
1061+ else:
1062+ log("Installing from file: {} with options: {}".format(requirements,
1063+ command))
1064 pip_execute(command)
1065
1066
1067
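The command the updated helper assembles can be modelled without invoking pip at all. A sketch under the assumption that only the `proxy`, `src` and `log` options are honoured, as in `parse_options` above (`build_pip_command` is an illustrative name, not part of charm-helpers):

```python
def build_pip_command(requirements, constraints=None, **options):
    # Mirror pip_install_requirements(): filter the allowed options,
    # then append the requirements file and optional constraints file.
    available_options = ('proxy', 'src', 'log')
    command = ["install"]
    for key, value in sorted(options.items()):
        if key in available_options:
            command.append("--{0}={1}".format(key, value))
    command.append("-r {0}".format(requirements))
    if constraints:
        command.append("-c {0}".format(constraints))
    return command
```

The `-c` file pins versions without forcing installation, which is why it rides alongside `-r` rather than replacing it.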
1068=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1069--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-11-11 14:57:38 +0000
1070+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-02-22 12:50:59 +0000
1071@@ -23,6 +23,8 @@
1072 # James Page <james.page@ubuntu.com>
1073 # Adam Gandelman <adamg@ubuntu.com>
1074 #
1075+import bisect
1076+import six
1077
1078 import os
1079 import shutil
1080@@ -72,6 +74,394 @@
1081 err to syslog = {use_syslog}
1082 clog to syslog = {use_syslog}
1083 """
1084+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
1085+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
1086+
1087+
1088+def validator(value, valid_type, valid_range=None):
1089+ """
1090+ Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
1091+ Example input:
1092+ validator(value=1,
1093+ valid_type=int,
1094+ valid_range=[0, 2])
1095+ This says I'm testing value=1. It must be an int inclusive in [0,2]
1096+
1097+ :param value: The value to validate
1098+ :param valid_type: The type that value should be.
1099+ :param valid_range: A range of values that value can assume.
1100+ :return:
1101+ """
1102+ assert isinstance(value, valid_type), "{} is not a {}".format(
1103+ value,
1104+ valid_type)
1105+ if valid_range is not None:
1106+ assert isinstance(valid_range, list), \
1107+ "valid_range must be a list, was given {}".format(valid_range)
1108+ # If we're dealing with strings
1109+ if valid_type is six.string_types:
1110+ assert value in valid_range, \
1111+ "{} is not in the list {}".format(value, valid_range)
1112+ # Integer, float should have a min and max
1113+ else:
1114+ if len(valid_range) != 2:
1115+ raise ValueError(
1116+ "Invalid valid_range list of {} for {}. "
1117+ "List must be [min,max]".format(valid_range, value))
1118+ assert value >= valid_range[0], \
1119+ "{} is less than minimum allowed value of {}".format(
1120+ value, valid_range[0])
1121+ assert value <= valid_range[1], \
1122+ "{} is greater than maximum allowed value of {}".format(
1123+ value, valid_range[1])
1124+
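`validator` is assertion-based, so a failed check raises `AssertionError` rather than returning a value. A quick standalone illustration of the contract, using plain `str` in place of `six.string_types`:

```python
def check(value, valid_type, valid_range=None):
    # Trimmed restatement of validator(): type check, then either
    # membership (strings) or an inclusive [min, max] range.
    assert isinstance(value, valid_type)
    if valid_range is not None:
        if isinstance(value, str):
            assert value in valid_range
        else:
            low, high = valid_range
            assert low <= value <= high


check(1, int, [0, 2])                               # int in [0, 2]: ok
check("writeback", str, ["readonly", "writeback"])  # allowed string: ok
```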
1125+
1126+class PoolCreationError(Exception):
1127+ """
1128+ A custom error to inform the caller that a pool creation failed. Provides an error message
1129+ """
1130+ def __init__(self, message):
1131+ super(PoolCreationError, self).__init__(message)
1132+
1133+
1134+class Pool(object):
1135+ """
1136+ An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
1137+ Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
1138+ """
1139+ def __init__(self, service, name):
1140+ self.service = service
1141+ self.name = name
1142+
1143+ # Create the pool if it doesn't exist already
1144+ # To be implemented by subclasses
1145+ def create(self):
1146+ pass
1147+
1148+ def add_cache_tier(self, cache_pool, mode):
1149+ """
1150+ Adds a new cache tier to an existing pool.
1151+ :param cache_pool: six.string_types. The cache tier pool name to add.
1152+ :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
1153+ :return: None
1154+ """
1155+ # Check the input types and values
1156+ validator(value=cache_pool, valid_type=six.string_types)
1157+ validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
1158+
1159+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
1160+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
1161+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
1162+ check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
1163+
1164+ def remove_cache_tier(self, cache_pool):
1165+ """
1166+ Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
1167+ :param cache_pool: six.string_types. The cache tier pool name to remove.
1168+ :return: None
1169+ """
1170+ # read-only is easy, writeback is much harder
1171+        mode = get_cache_mode(self.service, cache_pool)
1172+ if mode == 'readonly':
1173+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
1174+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
1175+
1176+ elif mode == 'writeback':
1177+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
1178+ # Flush the cache and wait for it to return
1179+ check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
1180+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
1181+ check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
1182+
1183+ def get_pgs(self, pool_size):
1184+ """
1185+ :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
1186+ erasure coded pools
1187+ :return: int. The number of pgs to use.
1188+ """
1189+ validator(value=pool_size, valid_type=int)
1190+ osds = get_osds(self.service)
1191+ if not osds:
1192+ # NOTE(james-page): Default to 200 for older ceph versions
1193+ # which don't support OSD query from cli
1194+ return 200
1195+
1196+        # Calculate based on Ceph best practices
1197+        if len(osds) < 5:
1198+            return 128
1199+        elif len(osds) < 10:
1200+            return 512
1201+        elif len(osds) < 50:
1202+            return 4096
1203+        else:
1204+            estimate = (len(osds) * 100) // pool_size
1205+ # Return the next nearest power of 2
1206+            index = bisect.bisect_right(powers_of_two, estimate)
1207+            return powers_of_two[min(index, len(powers_of_two) - 1)]
1208+
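The placement-group sizing above follows the usual ~100 PGs per OSD rule of thumb and rounds up via `bisect`. A self-contained sketch, operating on an OSD count rather than the OSD list and clamping the table index so very large estimates cannot overrun it:

```python
import bisect

# For 50 < osds < 240,000 OSDs, as in the table above
POWERS_OF_TWO = [8192, 16384, 32768, 65536, 131072, 262144,
                 524288, 1048576, 2097152, 4194304, 8388608]


def pg_count(num_osds, pool_size):
    if num_osds == 0:
        return 200        # older ceph: no 'osd ls' support
    if num_osds < 5:
        return 128
    if num_osds < 10:
        return 512
    if num_osds < 50:
        return 4096
    estimate = (num_osds * 100) // pool_size
    # Round up to the next power of two in the table, clamped
    index = bisect.bisect_right(POWERS_OF_TWO, estimate)
    return POWERS_OF_TWO[min(index, len(POWERS_OF_TWO) - 1)]
```

For example, 100 OSDs with 3 replicas gives an estimate of 3333 PGs, which rounds up to the first table entry, 8192.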
1209+
1210+class ReplicatedPool(Pool):
1211+ def __init__(self, service, name, replicas=2):
1212+ super(ReplicatedPool, self).__init__(service=service, name=name)
1213+ self.replicas = replicas
1214+
1215+ def create(self):
1216+ if not pool_exists(self.service, self.name):
1217+ # Create it
1218+ pgs = self.get_pgs(self.replicas)
1219+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
1220+ try:
1221+ check_call(cmd)
1222+ except CalledProcessError:
1223+ raise
1224+
1225+
1226+# Default jerasure erasure coded pool
1227+class ErasurePool(Pool):
1228+ def __init__(self, service, name, erasure_code_profile="default"):
1229+ super(ErasurePool, self).__init__(service=service, name=name)
1230+ self.erasure_code_profile = erasure_code_profile
1231+
1232+ def create(self):
1233+ if not pool_exists(self.service, self.name):
1234+ # Try to find the erasure profile information so we can properly size the pgs
1235+ erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
1236+
1237+ # Check for errors
1238+ if erasure_profile is None:
1239+ log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
1240+ level=ERROR)
1241+ raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
1242+ if 'k' not in erasure_profile or 'm' not in erasure_profile:
1243+ # Error
1244+ log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
1245+ level=ERROR)
1246+ raise PoolCreationError(
1247+ message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
1248+
1249+ pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
1250+ # Create it
1251+ cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
1252+ 'erasure', self.erasure_code_profile]
1253+ try:
1254+ check_call(cmd)
1255+ except CalledProcessError:
1256+ raise
1257+
1258+
1259+
1260+def get_erasure_profile(service, name):
1261+    """Get an existing erasure code profile, parsed from json output.
1262+
1263+    :param service: six.string_types. The Ceph user name to run the command under
1264+    :param name: six.string_types. The erasure code profile name
1265+    :return: dict of profile settings, or None if the lookup fails
1266+    """
1268+ try:
1269+ out = check_output(['ceph', '--id', service,
1270+ 'osd', 'erasure-code-profile', 'get',
1271+ name, '--format=json'])
1272+ return json.loads(out)
1273+ except (CalledProcessError, OSError, ValueError):
1274+ return None
1275+
1276+
1277+def pool_set(service, pool_name, key, value):
1278+ """
1279+ Sets a value for a RADOS pool in ceph.
1280+ :param service: six.string_types. The Ceph user name to run the command under
1281+ :param pool_name: six.string_types
1282+ :param key: six.string_types
1283+ :param value:
1284+ :return: None. Can raise CalledProcessError
1285+ """
1286+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
1287+ try:
1288+ check_call(cmd)
1289+ except CalledProcessError:
1290+ raise
1291+
1292+
1293+def snapshot_pool(service, pool_name, snapshot_name):
1294+ """
1295+ Snapshots a RADOS pool in ceph.
1296+ :param service: six.string_types. The Ceph user name to run the command under
1297+ :param pool_name: six.string_types
1298+ :param snapshot_name: six.string_types
1299+ :return: None. Can raise CalledProcessError
1300+ """
1301+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
1302+ try:
1303+ check_call(cmd)
1304+ except CalledProcessError:
1305+ raise
1306+
1307+
1308+def remove_pool_snapshot(service, pool_name, snapshot_name):
1309+ """
1310+ Remove a snapshot from a RADOS pool in ceph.
1311+ :param service: six.string_types. The Ceph user name to run the command under
1312+ :param pool_name: six.string_types
1313+ :param snapshot_name: six.string_types
1314+ :return: None. Can raise CalledProcessError
1315+ """
1316+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
1317+ try:
1318+ check_call(cmd)
1319+ except CalledProcessError:
1320+ raise
1321+
1322+
1323+# max_bytes should be an int or long
1324+def set_pool_quota(service, pool_name, max_bytes):
1325+ """
1326+ :param service: six.string_types. The Ceph user name to run the command under
1327+ :param pool_name: six.string_types
1328+ :param max_bytes: int or long
1329+ :return: None. Can raise CalledProcessError
1330+ """
1331+ # Set a byte quota on a RADOS pool in ceph.
1332+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', str(max_bytes)]
1333+ try:
1334+ check_call(cmd)
1335+ except CalledProcessError:
1336+ raise
1337+
1338+
1339+def remove_pool_quota(service, pool_name):
1340+ """
1341+    Remove the byte quota from a RADOS pool in ceph.
1342+ :param service: six.string_types. The Ceph user name to run the command under
1343+ :param pool_name: six.string_types
1344+ :return: None. Can raise CalledProcessError
1345+ """
1346+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
1347+ try:
1348+ check_call(cmd)
1349+ except CalledProcessError:
1350+ raise
1351+
1352+
1353+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
1354+ data_chunks=2, coding_chunks=1,
1355+ locality=None, durability_estimator=None):
1356+ """
1357+ Create a new erasure code profile if one does not already exist for it. Updates
1358+ the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
1359+ for more details
1360+ :param service: six.string_types. The Ceph user name to run the command under
1361+ :param profile_name: six.string_types
1362+ :param erasure_plugin_name: six.string_types
1363+ :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
1364+ 'room', 'root', 'row'])
1365+ :param data_chunks: int
1366+ :param coding_chunks: int
1367+ :param locality: int
1368+ :param durability_estimator: int
1369+ :return: None. Can raise CalledProcessError
1370+ """
1371+ # Ensure this failure_domain is allowed by Ceph
1372+ validator(failure_domain, six.string_types,
1373+ ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
1374+
1375+ cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
1376+ 'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
1377+ 'ruleset_failure_domain=' + failure_domain]
1378+ if locality is not None and durability_estimator is not None:
1379+ raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
1380+
1381+ # Add plugin specific information
1382+ if locality is not None:
1383+ # For local erasure codes
1384+ cmd.append('l=' + str(locality))
1385+ if durability_estimator is not None:
1386+ # For Shec erasure codes
1387+ cmd.append('c=' + str(durability_estimator))
1388+
1389+ if erasure_profile_exists(service, profile_name):
1390+ cmd.append('--force')
1391+
1392+ try:
1393+ check_call(cmd)
1394+ except CalledProcessError:
1395+ raise
1396+
1397+
1398+def rename_pool(service, old_name, new_name):
1399+ """
1400+ Rename a Ceph pool from old_name to new_name
1401+ :param service: six.string_types. The Ceph user name to run the command under
1402+ :param old_name: six.string_types
1403+ :param new_name: six.string_types
1404+ :return: None
1405+ """
1406+ validator(value=old_name, valid_type=six.string_types)
1407+ validator(value=new_name, valid_type=six.string_types)
1408+
1409+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
1410+ check_call(cmd)
1411+
1412+
1413+def erasure_profile_exists(service, name):
1414+ """
1415+ Check to see if an Erasure code profile already exists.
1416+ :param service: six.string_types. The Ceph user name to run the command under
1417+ :param name: six.string_types
1418+    :return: bool
1419+ """
1420+ validator(value=name, valid_type=six.string_types)
1421+ try:
1422+ check_call(['ceph', '--id', service,
1423+ 'osd', 'erasure-code-profile', 'get',
1424+ name])
1425+ return True
1426+ except CalledProcessError:
1427+ return False
1428+
1429+
1430+def get_cache_mode(service, pool_name):
1431+ """
1432+ Find the current caching mode of the pool_name given.
1433+ :param service: six.string_types. The Ceph user name to run the command under
1434+ :param pool_name: six.string_types
1435+    :return: six.string_types or None
1436+ """
1437+ validator(value=service, valid_type=six.string_types)
1438+ validator(value=pool_name, valid_type=six.string_types)
1439+ out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
1440+ try:
1441+ osd_json = json.loads(out)
1442+ for pool in osd_json['pools']:
1443+ if pool['pool_name'] == pool_name:
1444+ return pool['cache_mode']
1445+ return None
1446+ except ValueError:
1447+ raise
1448+
1449+
1450+def pool_exists(service, name):
1451+ """Check to see if a RADOS pool already exists."""
1452+ try:
1453+ out = check_output(['rados', '--id', service,
1454+ 'lspools']).decode('UTF-8')
1455+ except CalledProcessError:
1456+ return False
1457+
1458+ return name in out
1459+
1460+
1461+def get_osds(service):
1462+ """Return a list of all Ceph Object Storage Daemons currently in the
1463+ cluster.
1464+ """
1465+ version = ceph_version()
1466+ if version and version >= '0.56':
1467+ return json.loads(check_output(['ceph', '--id', service,
1468+ 'osd', 'ls',
1469+ '--format=json']).decode('UTF-8'))
1470+
1471+ return None
1472
1473
1474 def install():
1475@@ -101,53 +491,37 @@
1476 check_call(cmd)
1477
1478
1479-def pool_exists(service, name):
1480- """Check to see if a RADOS pool already exists."""
1481- try:
1482- out = check_output(['rados', '--id', service,
1483- 'lspools']).decode('UTF-8')
1484- except CalledProcessError:
1485- return False
1486-
1487- return name in out
1488-
1489-
1490-def get_osds(service):
1491- """Return a list of all Ceph Object Storage Daemons currently in the
1492- cluster.
1493- """
1494- version = ceph_version()
1495- if version and version >= '0.56':
1496- return json.loads(check_output(['ceph', '--id', service,
1497- 'osd', 'ls',
1498- '--format=json']).decode('UTF-8'))
1499-
1500- return None
1501-
1502-
1503-def create_pool(service, name, replicas=3):
1504+def update_pool(client, pool, settings):
1505+    """Apply a dict of settings to a pool via 'ceph osd pool set'."""
1506+    cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
1506+ for k, v in six.iteritems(settings):
1507+ cmd.append(k)
1508+ cmd.append(v)
1509+
1510+ check_call(cmd)
1511+
1512+
1513+def create_pool(service, name, replicas=3, pg_num=None):
1514 """Create a new RADOS pool."""
1515 if pool_exists(service, name):
1516 log("Ceph pool {} already exists, skipping creation".format(name),
1517 level=WARNING)
1518 return
1519
1520- # Calculate the number of placement groups based
1521- # on upstream recommended best practices.
1522- osds = get_osds(service)
1523- if osds:
1524- pgnum = (len(osds) * 100 // replicas)
1525- else:
1526- # NOTE(james-page): Default to 200 for older ceph versions
1527- # which don't support OSD query from cli
1528- pgnum = 200
1529-
1530- cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
1531- check_call(cmd)
1532-
1533- cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
1534- str(replicas)]
1535- check_call(cmd)
1536+ if not pg_num:
1537+ # Calculate the number of placement groups based
1538+ # on upstream recommended best practices.
1539+ osds = get_osds(service)
1540+ if osds:
1541+ pg_num = (len(osds) * 100 // replicas)
1542+ else:
1543+ # NOTE(james-page): Default to 200 for older ceph versions
1544+ # which don't support OSD query from cli
1545+ pg_num = 200
1546+
1547+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
1548+ check_call(cmd)
1549+
1550+ update_pool(service, name, settings={'size': str(replicas)})
1551
1552
1553 def delete_pool(service, name):
1554@@ -202,10 +576,10 @@
1555 log('Created new keyfile at %s.' % keyfile, level=INFO)
1556
1557
1558-def get_ceph_nodes():
1559- """Query named relation 'ceph' to determine current nodes."""
1560+def get_ceph_nodes(relation='ceph'):
1561+ """Query named relation to determine current nodes."""
1562 hosts = []
1563- for r_id in relation_ids('ceph'):
1564+ for r_id in relation_ids(relation):
1565 for unit in related_units(r_id):
1566 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
1567
1568@@ -357,14 +731,14 @@
1569 service_start(svc)
1570
1571
1572-def ensure_ceph_keyring(service, user=None, group=None):
1573+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
1574 """Ensures a ceph keyring is created for a named service and optionally
1575 ensures user and group ownership.
1576
1577 Returns False if no ceph key is available in relation state.
1578 """
1579 key = None
1580- for rid in relation_ids('ceph'):
1581+ for rid in relation_ids(relation):
1582 for unit in related_units(rid):
1583 key = relation_get('key', rid=rid, unit=unit)
1584 if key:
1585@@ -405,6 +779,7 @@
1586
1587 The API is versioned and defaults to version 1.
1588 """
1589+
1590 def __init__(self, api_version=1, request_id=None):
1591 self.api_version = api_version
1592 if request_id:
1593@@ -413,9 +788,16 @@
1594 self.request_id = str(uuid.uuid1())
1595 self.ops = []
1596
1597- def add_op_create_pool(self, name, replica_count=3):
1598+ def add_op_create_pool(self, name, replica_count=3, pg_num=None):
1599+ """Adds an operation to create a pool.
1600+
1601+ @param pg_num setting: optional setting. If not provided, this value
1602+ will be calculated by the broker based on how many OSDs are in the
1603+ cluster at the time of creation. Note that, if provided, this value
1604+ will be capped at the current available maximum.
1605+ """
1606 self.ops.append({'op': 'create-pool', 'name': name,
1607- 'replicas': replica_count})
1608+ 'replicas': replica_count, 'pg_num': pg_num})
1609
1610 def set_ops(self, ops):
1611 """Set request ops to provided value.
1612@@ -433,8 +815,8 @@
1613 def _ops_equal(self, other):
1614 if len(self.ops) == len(other.ops):
1615 for req_no in range(0, len(self.ops)):
1616- for key in ['replicas', 'name', 'op']:
1617- if self.ops[req_no][key] != other.ops[req_no][key]:
1618+ for key in ['replicas', 'name', 'op', 'pg_num']:
1619+ if self.ops[req_no].get(key) != other.ops[req_no].get(key):
1620 return False
1621 else:
1622 return False
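Switching `_ops_equal` from indexing to `.get()` lets a new request carrying `pg_num` compare cleanly against one recorded before the key existed. A standalone sketch of that comparison (the free function `ops_equal` is ours; the real method lives on `CephBrokerRq`):

```python
def ops_equal(a_ops, b_ops):
    # Compare broker ops field-by-field; .get() returns None for a
    # missing key instead of raising KeyError, so older recorded
    # requests without 'pg_num' still match equivalent new ones.
    if len(a_ops) != len(b_ops):
        return False
    for a, b in zip(a_ops, b_ops):
        for key in ('replicas', 'name', 'op', 'pg_num'):
            if a.get(key) != b.get(key):
                return False
    return True
```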
1623@@ -540,7 +922,7 @@
1624 return request
1625
1626
1627-def get_request_states(request):
1628+def get_request_states(request, relation='ceph'):
1629 """Return a dict of requests per relation id with their corresponding
1630 completion state.
1631
1632@@ -552,7 +934,7 @@
1633 """
1634 complete = []
1635 requests = {}
1636- for rid in relation_ids('ceph'):
1637+ for rid in relation_ids(relation):
1638 complete = False
1639 previous_request = get_previous_request(rid)
1640 if request == previous_request:
1641@@ -570,14 +952,14 @@
1642 return requests
1643
1644
1645-def is_request_sent(request):
1646+def is_request_sent(request, relation='ceph'):
1647 """Check to see if a functionally equivalent request has already been sent
1648
1649     Returns True if a similar request has been sent
1650
1651 @param request: A CephBrokerRq object
1652 """
1653- states = get_request_states(request)
1654+ states = get_request_states(request, relation=relation)
1655 for rid in states.keys():
1656 if not states[rid]['sent']:
1657 return False
1658@@ -585,7 +967,7 @@
1659 return True
1660
1661
1662-def is_request_complete(request):
1663+def is_request_complete(request, relation='ceph'):
1664 """Check to see if a functionally equivalent request has already been
1665 completed
1666
1667@@ -593,7 +975,7 @@
1668
1669 @param request: A CephBrokerRq object
1670 """
1671- states = get_request_states(request)
1672+ states = get_request_states(request, relation=relation)
1673 for rid in states.keys():
1674 if not states[rid]['complete']:
1675 return False
1676@@ -643,15 +1025,15 @@
1677 return 'broker-rsp-' + local_unit().replace('/', '-')
1678
1679
1680-def send_request_if_needed(request):
1681+def send_request_if_needed(request, relation='ceph'):
1682 """Send broker request if an equivalent request has not already been sent
1683
1684 @param request: A CephBrokerRq object
1685 """
1686- if is_request_sent(request):
1687+ if is_request_sent(request, relation=relation):
1688 log('Request already sent but not complete, not sending new request',
1689 level=DEBUG)
1690 else:
1691- for rid in relation_ids('ceph'):
1692+ for rid in relation_ids(relation):
1693 log('Sending request {}'.format(request.request_id), level=DEBUG)
1694 relation_set(relation_id=rid, broker_req=request.request)
1695
1696=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
1697--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-02-19 22:08:13 +0000
1698+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-02-22 12:50:59 +0000
1699@@ -76,3 +76,13 @@
1700 check_call(cmd)
1701
1702 return create_loopback(path)
1703+
1704+
1705+def is_mapped_loopback_device(device):
1706+ """
1707+ Checks if a given device name is an existing/mapped loopback device.
1708+ :param device: str: Full path to the device (eg, /dev/loop1).
1709+    :returns: str: Path to the backing file if it is a loopback device,
1710+              an empty string otherwise
1711+ """
1712+ return loopback_devices().get(device, "")
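The lookup relies on `loopback_devices()` parsing `losetup -a` into a device-to-backing-file map. Passing that mapping in explicitly makes the behaviour easy to see in isolation (a hypothetical variant, not the charm-helpers signature):

```python
def backing_file_for(device, mapping):
    # 'mapping' stands in for loopback_devices(), e.g.
    # {'/dev/loop0': '/srv/images/store.img'}
    # Missing devices fall back to an empty string, matching the helper.
    return mapping.get(device, "")
```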
1713
1714=== modified file 'hooks/charmhelpers/core/hookenv.py'
1715--- hooks/charmhelpers/core/hookenv.py 2015-11-11 14:57:38 +0000
1716+++ hooks/charmhelpers/core/hookenv.py 2016-02-22 12:50:59 +0000
1717@@ -492,7 +492,7 @@
1718
1719 @cached
1720 def peer_relation_id():
1721- '''Get a peer relation id if a peer relation has been joined, else None.'''
1722+ '''Get the peers relation id if a peers relation has been joined, else None.'''
1723 md = metadata()
1724 section = md.get('peers')
1725 if section:
1726@@ -517,12 +517,12 @@
1727 def relation_to_role_and_interface(relation_name):
1728 """
1729 Given the name of a relation, return the role and the name of the interface
1730- that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
1731+ that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
1732
1733 :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
1734 """
1735 _metadata = metadata()
1736- for role in ('provides', 'requires', 'peer'):
1737+ for role in ('provides', 'requires', 'peers'):
1738 interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
1739 if interface:
1740 return role, interface
1741@@ -534,7 +534,7 @@
1742 """
1743 Given a role and interface name, return a list of relation names for the
1744 current charm that use that interface under that role (where role is one
1745- of ``provides``, ``requires``, or ``peer``).
1746+ of ``provides``, ``requires``, or ``peers``).
1747
1748 :returns: A list of relation names.
1749 """
1750@@ -555,7 +555,7 @@
1751 :returns: A list of relation names.
1752 """
1753 results = []
1754- for role in ('provides', 'requires', 'peer'):
1755+ for role in ('provides', 'requires', 'peers'):
1756 results.extend(role_and_interface_to_relations(role, interface_name))
1757 return results
1758
1759@@ -637,7 +637,7 @@
1760
1761
1762 @cached
1763-def storage_get(attribute="", storage_id=""):
1764+def storage_get(attribute=None, storage_id=None):
1765 """Get storage attributes"""
1766 _args = ['storage-get', '--format=json']
1767 if storage_id:
1768@@ -651,7 +651,7 @@
1769
1770
1771 @cached
1772-def storage_list(storage_name=""):
1773+def storage_list(storage_name=None):
1774 """List the storage IDs for the unit"""
1775 _args = ['storage-list', '--format=json']
1776 if storage_name:
1777@@ -878,6 +878,40 @@
1778 subprocess.check_call(cmd)
1779
1780
1781+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1782+def payload_register(ptype, klass, pid):
1783+ """ is used while a hook is running to let Juju know that a
1784+ payload has been started."""
1785+ cmd = ['payload-register']
1786+ for x in [ptype, klass, pid]:
1787+ cmd.append(x)
1788+ subprocess.check_call(cmd)
1789+
1790+
1791+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1792+def payload_unregister(klass, pid):
1793+ """ is used while a hook is running to let Juju know
1794+ that a payload has been manually stopped. The <class> and <id> provided
1795+ must match a payload that has been previously registered with juju using
1796+ payload-register."""
1797+ cmd = ['payload-unregister']
1798+ for x in [klass, pid]:
1799+ cmd.append(x)
1800+ subprocess.check_call(cmd)
1801+
1802+
1803+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
1804+def payload_status_set(klass, pid, status):
1805+ """is used to update the current status of a registered payload.
1806+ The <class> and <id> provided must match a payload that has been previously
1807+ registered with juju using payload-register. The <status> must be one of the
1808+ follow: starting, started, stopping, stopped"""
1809+ cmd = ['payload-status-set']
1810+ for x in [klass, pid, status]:
1811+ cmd.append(x)
1812+ subprocess.check_call(cmd)
1813+
1814+
1815 @cached
1816 def juju_version():
1817 """Full version string (eg. '1.23.3.1-trusty-amd64')"""
1818
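The three payload hook-tool wrappers added above share one shape: build an argv list, then `check_call` it. A runnable sketch of just the argv construction (helper names here are hypothetical; the status vocabulary comes from the docstring above):

```python
VALID_PAYLOAD_STATUSES = ('starting', 'started', 'stopping', 'stopped')

def payload_register_cmd(ptype, klass, pid):
    # 'payload-register <type> <class> <id>'
    return ['payload-register', ptype, klass, pid]

def payload_status_set_cmd(klass, pid, status):
    # 'payload-status-set <class> <id> <status>'; status is constrained
    if status not in VALID_PAYLOAD_STATUSES:
        raise ValueError('invalid payload status: {}'.format(status))
    return ['payload-status-set', klass, pid, status]

print(payload_register_cmd('monitoring', 'nrpe', '0'))
```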
1819=== modified file 'hooks/charmhelpers/core/host.py'
1820--- hooks/charmhelpers/core/host.py 2015-11-11 14:57:38 +0000
1821+++ hooks/charmhelpers/core/host.py 2016-02-22 12:50:59 +0000
1822@@ -67,10 +67,14 @@
1823 """Pause a system service.
1824
1825 Stop it, and prevent it from starting again at boot."""
1826- stopped = service_stop(service_name)
1827+ stopped = True
1828+ if service_running(service_name):
1829+ stopped = service_stop(service_name)
1830 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1831 sysv_file = os.path.join(initd_dir, service_name)
1832- if os.path.exists(upstart_file):
1833+ if init_is_systemd():
1834+ service('disable', service_name)
1835+ elif os.path.exists(upstart_file):
1836 override_path = os.path.join(
1837 init_dir, '{}.override'.format(service_name))
1838 with open(override_path, 'w') as fh:
1839@@ -78,9 +82,9 @@
1840 elif os.path.exists(sysv_file):
1841 subprocess.check_call(["update-rc.d", service_name, "disable"])
1842 else:
1843- # XXX: Support SystemD too
1844 raise ValueError(
1845- "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
1846+ "Unable to detect {0} as SystemD, Upstart {1} or"
1847+ " SysV {2}".format(
1848 service_name, upstart_file, sysv_file))
1849 return stopped
1850
1851@@ -92,7 +96,9 @@
1852 Reenable starting again at boot. Start the service"""
1853 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
1854 sysv_file = os.path.join(initd_dir, service_name)
1855- if os.path.exists(upstart_file):
1856+ if init_is_systemd():
1857+ service('enable', service_name)
1858+ elif os.path.exists(upstart_file):
1859 override_path = os.path.join(
1860 init_dir, '{}.override'.format(service_name))
1861 if os.path.exists(override_path):
1862@@ -100,34 +106,43 @@
1863 elif os.path.exists(sysv_file):
1864 subprocess.check_call(["update-rc.d", service_name, "enable"])
1865 else:
1866- # XXX: Support SystemD too
1867 raise ValueError(
1868- "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
1869+ "Unable to detect {0} as SystemD, Upstart {1} or"
1870+ " SysV {2}".format(
1871 service_name, upstart_file, sysv_file))
1872
1873- started = service_start(service_name)
1874+ started = service_running(service_name)
1875+ if not started:
1876+ started = service_start(service_name)
1877 return started
1878
1879
1880 def service(action, service_name):
1881 """Control a system service"""
1882- cmd = ['service', service_name, action]
1883+ if init_is_systemd():
1884+ cmd = ['systemctl', action, service_name]
1885+ else:
1886+ cmd = ['service', service_name, action]
1887 return subprocess.call(cmd) == 0
1888
1889
1890-def service_running(service):
1891+def service_running(service_name):
1892 """Determine whether a system service is running"""
1893- try:
1894- output = subprocess.check_output(
1895- ['service', service, 'status'],
1896- stderr=subprocess.STDOUT).decode('UTF-8')
1897- except subprocess.CalledProcessError:
1898- return False
1899+ if init_is_systemd():
1900+ return service('is-active', service_name)
1901 else:
1902- if ("start/running" in output or "is running" in output):
1903- return True
1904- else:
1905+ try:
1906+ output = subprocess.check_output(
1907+ ['service', service_name, 'status'],
1908+ stderr=subprocess.STDOUT).decode('UTF-8')
1909+ except subprocess.CalledProcessError:
1910 return False
1911+ else:
1912+ if ("start/running" in output or "is running" in output or
1913+ "up and running" in output):
1914+ return True
1915+ else:
1916+ return False
1917
1918
1919 def service_available(service_name):
1920@@ -142,8 +157,29 @@
1921 return True
1922
1923
1924-def adduser(username, password=None, shell='/bin/bash', system_user=False):
1925- """Add a user to the system"""
1926+SYSTEMD_SYSTEM = '/run/systemd/system'
1927+
1928+
1929+def init_is_systemd():
1930+ """Return True if the host system uses systemd, False otherwise."""
1931+ return os.path.isdir(SYSTEMD_SYSTEM)
1932+
1933+
1934+def adduser(username, password=None, shell='/bin/bash', system_user=False,
1935+ primary_group=None, secondary_groups=None):
1936+ """Add a user to the system.
1937+
1938+ Will log but otherwise succeed if the user already exists.
1939+
1940+ :param str username: Username to create
1941+ :param str password: Password for user; if ``None``, create a system user
1942+ :param str shell: The default shell for the user
1943+ :param bool system_user: Whether to create a login or system user
1944+ :param str primary_group: Primary group for user; defaults to username
1945+ :param list secondary_groups: Optional list of additional groups
1946+
1947+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
1948+ """
1949 try:
1950 user_info = pwd.getpwnam(username)
1951 log('user {0} already exists!'.format(username))
1952@@ -158,6 +194,16 @@
1953 '--shell', shell,
1954 '--password', password,
1955 ])
1956+ if not primary_group:
1957+ try:
1958+ grp.getgrnam(username)
1959+ primary_group = username # avoid "group exists" error
1960+ except KeyError:
1961+ pass
1962+ if primary_group:
1963+ cmd.extend(['-g', primary_group])
1964+ if secondary_groups:
1965+ cmd.extend(['-G', ','.join(secondary_groups)])
1966 cmd.append(username)
1967 subprocess.check_call(cmd)
1968 user_info = pwd.getpwnam(username)
1969@@ -255,14 +301,12 @@
1970
1971
1972 def fstab_remove(mp):
1973- """Remove the given mountpoint entry from /etc/fstab
1974- """
1975+ """Remove the given mountpoint entry from /etc/fstab"""
1976 return Fstab.remove_by_mountpoint(mp)
1977
1978
1979 def fstab_add(dev, mp, fs, options=None):
1980- """Adds the given device entry to the /etc/fstab file
1981- """
1982+ """Adds the given device entry to the /etc/fstab file"""
1983 return Fstab.add(dev, mp, fs, options=options)
1984
1985
1986@@ -318,8 +362,7 @@
1987
1988
1989 def file_hash(path, hash_type='md5'):
1990- """
1991- Generate a hash checksum of the contents of 'path' or None if not found.
1992+ """Generate a hash checksum of the contents of 'path' or None if not found.
1993
1994 :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
1995 such as md5, sha1, sha256, sha512, etc.
1996@@ -334,10 +377,9 @@
1997
1998
1999 def path_hash(path):
2000- """
2001- Generate a hash checksum of all files matching 'path'. Standard wildcards
2002- like '*' and '?' are supported, see documentation for the 'glob' module for
2003- more information.
2004+ """Generate a hash checksum of all files matching 'path'. Standard
2005+ wildcards like '*' and '?' are supported, see documentation for the 'glob'
2006+ module for more information.
2007
2008 :return: dict: A { filename: hash } dictionary for all matched files.
2009 Empty if none found.
2010@@ -349,8 +391,7 @@
2011
2012
2013 def check_hash(path, checksum, hash_type='md5'):
2014- """
2015- Validate a file using a cryptographic checksum.
2016+ """Validate a file using a cryptographic checksum.
2017
2018 :param str checksum: Value of the checksum used to validate the file.
2019 :param str hash_type: Hash algorithm used to generate `checksum`.
2020@@ -365,6 +406,7 @@
2021
2022
2023 class ChecksumError(ValueError):
2024+ """A class derived from Value error to indicate the checksum failed."""
2025 pass
2026
2027
2028@@ -470,7 +512,7 @@
2029
2030
2031 def list_nics(nic_type=None):
2032- '''Return a list of nics of given type(s)'''
2033+ """Return a list of nics of given type(s)"""
2034 if isinstance(nic_type, six.string_types):
2035 int_types = [nic_type]
2036 else:
2037@@ -512,12 +554,13 @@
2038
2039
2040 def set_nic_mtu(nic, mtu):
2041- '''Set MTU on a network interface'''
2042+ """Set the Maximum Transmission Unit (MTU) on a network interface."""
2043 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
2044 subprocess.check_call(cmd)
2045
2046
2047 def get_nic_mtu(nic):
2048+ """Return the Maximum Transmission Unit (MTU) for a network interface."""
2049 cmd = ['ip', 'addr', 'show', nic]
2050 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2051 mtu = ""
2052@@ -529,6 +572,7 @@
2053
2054
2055 def get_nic_hwaddr(nic):
2056+ """Return the Media Access Control (MAC) for a network interface."""
2057 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2058 ip_output = subprocess.check_output(cmd).decode('UTF-8')
2059 hwaddr = ""
2060@@ -539,7 +583,7 @@
2061
2062
2063 def cmp_pkgrevno(package, revno, pkgcache=None):
2064- '''Compare supplied revno with the revno of the installed package
2065+ """Compare supplied revno with the revno of the installed package
2066
2067 * 1 => Installed revno is greater than supplied arg
2068 * 0 => Installed revno is the same as supplied arg
2069@@ -548,7 +592,7 @@
2070 This function imports apt_cache function from charmhelpers.fetch if
2071 the pkgcache argument is None. Be sure to add charmhelpers.fetch if
2072 you call this function, or pass an apt_pkg.Cache() instance.
2073- '''
2074+ """
2075 import apt_pkg
2076 if not pkgcache:
2077 from charmhelpers.fetch import apt_cache
2078@@ -558,19 +602,27 @@
2079
2080
2081 @contextmanager
2082-def chdir(d):
2083+def chdir(directory):
2084+ """Change the current working directory to a different directory for a code
2085+ block and return the previous directory after the block exits. Useful to
2086+ run commands from a specificed directory.
2087+
2088+ :param str directory: The directory path to change to for this context.
2089+ """
2090 cur = os.getcwd()
2091 try:
2092- yield os.chdir(d)
2093+ yield os.chdir(directory)
2094 finally:
2095 os.chdir(cur)
2096
2097
2098 def chownr(path, owner, group, follow_links=True, chowntopdir=False):
2099- """
2100- Recursively change user and group ownership of files and directories
2101+ """Recursively change user and group ownership of files and directories
2102 in given path. Doesn't chown path itself by default, only its children.
2103
2104+ :param str path: The string path to start changing ownership.
2105+ :param str owner: The owner string to use when looking up the uid.
2106+ :param str group: The group string to use when looking up the gid.
2107 :param bool follow_links: Also Chown links if True
2108 :param bool chowntopdir: Also chown path itself if True
2109 """
2110@@ -594,15 +646,23 @@
2111
2112
2113 def lchownr(path, owner, group):
2114+ """Recursively change user and group ownership of files and directories
2115+ in a given path, not following symbolic links. See the documentation for
2116+ 'os.lchown' for more information.
2117+
2118+ :param str path: The string path to start changing ownership.
2119+ :param str owner: The owner string to use when looking up the uid.
2120+ :param str group: The group string to use when looking up the gid.
2121+ """
2122 chownr(path, owner, group, follow_links=False)
2123
2124
2125 def get_total_ram():
2126- '''The total amount of system RAM in bytes.
2127+ """The total amount of system RAM in bytes.
2128
2129 This is what is reported by the OS, and may be overcommitted when
2130 there are multiple containers hosted on the same machine.
2131- '''
2132+ """
2133 with open('/proc/meminfo', 'r') as f:
2134 for line in f.readlines():
2135 if line:
2136
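With these changes, host.py dispatches every service operation on the init system in use, and detection is just a directory probe. A standalone sketch of the probe and the command dispatch, with paths parameterized so it can run anywhere (a simplification of the real helpers above):

```python
import os

SYSTEMD_SYSTEM = '/run/systemd/system'

def init_is_systemd(systemd_dir=SYSTEMD_SYSTEM):
    # systemd creates this directory at boot; its presence is the probe
    return os.path.isdir(systemd_dir)

def service_cmd(action, service_name, systemd=None):
    # Mirrors host.service(): systemctl on systemd hosts,
    # the 'service' wrapper otherwise
    if systemd is None:
        systemd = init_is_systemd()
    if systemd:
        return ['systemctl', action, service_name]
    return ['service', service_name, action]

print(service_cmd('restart', 'odl-controller', systemd=True))
```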
2137=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2138--- hooks/charmhelpers/core/services/helpers.py 2015-11-11 14:57:38 +0000
2139+++ hooks/charmhelpers/core/services/helpers.py 2016-02-22 12:50:59 +0000
2140@@ -243,13 +243,15 @@
2141 :param str source: The template source file, relative to
2142 `$CHARM_DIR/templates`
2143
2144- :param str target: The target to write the rendered template to
2145+ :param str target: The target to write the rendered template to (or None)
2146 :param str owner: The owner of the rendered file
2147 :param str group: The group of the rendered file
2148 :param int perms: The permissions of the rendered file
2149 :param partial on_change_action: functools partial to be executed when
2150 rendered file changes
2151 :param jinja2 loader template_loader: A jinja2 template loader
2152+
2153+ :return str: The rendered template
2154 """
2155 def __init__(self, source, target,
2156 owner='root', group='root', perms=0o444,
2157@@ -267,12 +269,14 @@
2158 if self.on_change_action and os.path.isfile(self.target):
2159 pre_checksum = host.file_hash(self.target)
2160 service = manager.get_service(service_name)
2161- context = {}
2162+ context = {'ctx': {}}
2163 for ctx in service.get('required_data', []):
2164 context.update(ctx)
2165- templating.render(self.source, self.target, context,
2166- self.owner, self.group, self.perms,
2167- template_loader=self.template_loader)
2168+ context['ctx'].update(ctx)
2169+
2170+ result = templating.render(self.source, self.target, context,
2171+ self.owner, self.group, self.perms,
2172+ template_loader=self.template_loader)
2173 if self.on_change_action:
2174 if pre_checksum == host.file_hash(self.target):
2175 hookenv.log(
2176@@ -281,6 +285,8 @@
2177 else:
2178 self.on_change_action()
2179
2180+ return result
2181+
2182
2183 # Convenience aliases for templates
2184 render_template = template = TemplateCallback
2185
2186=== modified file 'hooks/charmhelpers/core/templating.py'
2187--- hooks/charmhelpers/core/templating.py 2015-11-11 14:57:38 +0000
2188+++ hooks/charmhelpers/core/templating.py 2016-02-22 12:50:59 +0000
2189@@ -27,7 +27,8 @@
2190
2191 The `source` path, if not absolute, is relative to the `templates_dir`.
2192
2193- The `target` path should be absolute.
2194+ The `target` path should be absolute. It can also be `None`, in which
2195+ case no file will be written.
2196
2197 The context should be a dict containing the values to be replaced in the
2198 template.
2199@@ -36,6 +37,9 @@
2200
2201 If omitted, `templates_dir` defaults to the `templates` folder in the charm.
2202
2203+ The rendered template will be written to the file as well as being returned
2204+ as a string.
2205+
2206 Note: Using this requires python-jinja2; if it is not installed, calling
2207 this will attempt to use charmhelpers.fetch.apt_install to install it.
2208 """
2209@@ -67,9 +71,11 @@
2210 level=hookenv.ERROR)
2211 raise e
2212 content = template.render(context)
2213- target_dir = os.path.dirname(target)
2214- if not os.path.exists(target_dir):
2215- # This is a terrible default directory permission, as the file
2216- # or its siblings will often contain secrets.
2217- host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
2218- host.write_file(target, content.encode(encoding), owner, group, perms)
2219+ if target is not None:
2220+ target_dir = os.path.dirname(target)
2221+ if not os.path.exists(target_dir):
2222+ # This is a terrible default directory permission, as the file
2223+ # or its siblings will often contain secrets.
2224+ host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
2225+ host.write_file(target, content.encode(encoding), owner, group, perms)
2226+ return content
2227
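After this change, `render()` returns the rendered string and, when `target` is `None`, skips writing entirely. A jinja2-free sketch of that control flow (`str.format` stands in for the real template engine; ownership and permission handling are omitted):

```python
import os

def render(template_str, target, context):
    # str.format stands in for jinja2; the real helper also sets
    # owner, group and perms on the written file
    content = template_str.format(**context)
    if target is not None:
        target_dir = os.path.dirname(target)
        if target_dir and not os.path.exists(target_dir):
            os.makedirs(target_dir)
        with open(target, 'w') as f:
            f.write(content)
    return content

# target=None: the caller gets the string and no file is touched
print(render("port={port}", None, {"port": 8181}))  # port=8181
```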
2228=== modified file 'hooks/charmhelpers/fetch/__init__.py'
2229--- hooks/charmhelpers/fetch/__init__.py 2015-11-11 14:57:38 +0000
2230+++ hooks/charmhelpers/fetch/__init__.py 2016-02-22 12:50:59 +0000
2231@@ -98,6 +98,14 @@
2232 'liberty/proposed': 'trusty-proposed/liberty',
2233 'trusty-liberty/proposed': 'trusty-proposed/liberty',
2234 'trusty-proposed/liberty': 'trusty-proposed/liberty',
2235+ # Mitaka
2236+ 'mitaka': 'trusty-updates/mitaka',
2237+ 'trusty-mitaka': 'trusty-updates/mitaka',
2238+ 'trusty-mitaka/updates': 'trusty-updates/mitaka',
2239+ 'trusty-updates/mitaka': 'trusty-updates/mitaka',
2240+ 'mitaka/proposed': 'trusty-proposed/mitaka',
2241+ 'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
2242+ 'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
2243 }
2244
2245 # The order of this list is very important. Handlers should be listed in from
2246@@ -411,7 +419,7 @@
2247 importlib.import_module(package),
2248 classname)
2249 plugin_list.append(handler_class())
2250- except (ImportError, AttributeError):
2251+ except NotImplementedError:
2252 # Skip missing plugins so that they can be ommitted from
2253 # installation if desired
2254 log("FetchHandler {} not found, skipping plugin".format(
2255
2256=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
2257--- hooks/charmhelpers/fetch/archiveurl.py 2015-07-22 12:10:31 +0000
2258+++ hooks/charmhelpers/fetch/archiveurl.py 2016-02-22 12:50:59 +0000
2259@@ -108,7 +108,7 @@
2260 install_opener(opener)
2261 response = urlopen(source)
2262 try:
2263- with open(dest, 'w') as dest_file:
2264+ with open(dest, 'wb') as dest_file:
2265 dest_file.write(response.read())
2266 except Exception as e:
2267 if os.path.isfile(dest):
2268
2269=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
2270--- hooks/charmhelpers/fetch/bzrurl.py 2015-02-19 22:08:13 +0000
2271+++ hooks/charmhelpers/fetch/bzrurl.py 2016-02-22 12:50:59 +0000
2272@@ -15,60 +15,50 @@
2273 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2274
2275 import os
2276+from subprocess import check_call
2277 from charmhelpers.fetch import (
2278 BaseFetchHandler,
2279- UnhandledSource
2280+ UnhandledSource,
2281+ filter_installed_packages,
2282+ apt_install,
2283 )
2284 from charmhelpers.core.host import mkdir
2285
2286-import six
2287-if six.PY3:
2288- raise ImportError('bzrlib does not support Python3')
2289
2290-try:
2291- from bzrlib.branch import Branch
2292- from bzrlib import bzrdir, workingtree, errors
2293-except ImportError:
2294- from charmhelpers.fetch import apt_install
2295- apt_install("python-bzrlib")
2296- from bzrlib.branch import Branch
2297- from bzrlib import bzrdir, workingtree, errors
2298+if filter_installed_packages(['bzr']) != []:
2299+ apt_install(['bzr'])
2300+ if filter_installed_packages(['bzr']) != []:
2301+ raise NotImplementedError('Unable to install bzr')
2302
2303
2304 class BzrUrlFetchHandler(BaseFetchHandler):
2305 """Handler for bazaar branches via generic and lp URLs"""
2306 def can_handle(self, source):
2307 url_parts = self.parse_url(source)
2308- if url_parts.scheme not in ('bzr+ssh', 'lp'):
2309+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
2310 return False
2311+ elif not url_parts.scheme:
2312+ return os.path.exists(os.path.join(source, '.bzr'))
2313 else:
2314 return True
2315
2316 def branch(self, source, dest):
2317- url_parts = self.parse_url(source)
2318- # If we use lp:branchname scheme we need to load plugins
2319 if not self.can_handle(source):
2320 raise UnhandledSource("Cannot handle {}".format(source))
2321- if url_parts.scheme == "lp":
2322- from bzrlib.plugin import load_plugins
2323- load_plugins()
2324- try:
2325- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
2326- except errors.AlreadyControlDirError:
2327- local_branch = Branch.open(dest)
2328- try:
2329- remote_branch = Branch.open(source)
2330- remote_branch.push(local_branch)
2331- tree = workingtree.WorkingTree.open(dest)
2332- tree.update()
2333- except Exception as e:
2334- raise e
2335+ if os.path.exists(dest):
2336+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
2337+ else:
2338+ check_call(['bzr', 'branch', source, dest])
2339
2340- def install(self, source):
2341+ def install(self, source, dest=None):
2342 url_parts = self.parse_url(source)
2343 branch_name = url_parts.path.strip("/").split("/")[-1]
2344- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2345- branch_name)
2346+ if dest:
2347+ dest_dir = os.path.join(dest, branch_name)
2348+ else:
2349+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2350+ branch_name)
2351+
2352 if not os.path.exists(dest_dir):
2353 mkdir(dest_dir, perms=0o755)
2354 try:
2355
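The rewritten `branch()` drives the bzr CLI directly; which command it runs depends only on whether `dest` already exists. A sketch of the command selection (nothing is executed here):

```python
def bzr_fetch_cmd(source, dest, dest_exists):
    # Existing checkout: pull with --overwrite; otherwise branch fresh
    if dest_exists:
        return ['bzr', 'pull', '--overwrite', '-d', dest, source]
    return ['bzr', 'branch', source, dest]

print(bzr_fetch_cmd('lp:mybranch', '/tmp/dest', dest_exists=False))
```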
2356=== modified file 'hooks/charmhelpers/fetch/giturl.py'
2357--- hooks/charmhelpers/fetch/giturl.py 2015-07-22 12:10:31 +0000
2358+++ hooks/charmhelpers/fetch/giturl.py 2016-02-22 12:50:59 +0000
2359@@ -15,24 +15,18 @@
2360 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2361
2362 import os
2363+from subprocess import check_call, CalledProcessError
2364 from charmhelpers.fetch import (
2365 BaseFetchHandler,
2366- UnhandledSource
2367+ UnhandledSource,
2368+ filter_installed_packages,
2369+ apt_install,
2370 )
2371-from charmhelpers.core.host import mkdir
2372-
2373-import six
2374-if six.PY3:
2375- raise ImportError('GitPython does not support Python 3')
2376-
2377-try:
2378- from git import Repo
2379-except ImportError:
2380- from charmhelpers.fetch import apt_install
2381- apt_install("python-git")
2382- from git import Repo
2383-
2384-from git.exc import GitCommandError # noqa E402
2385+
2386+if filter_installed_packages(['git']) != []:
2387+ apt_install(['git'])
2388+ if filter_installed_packages(['git']) != []:
2389+ raise NotImplementedError('Unable to install git')
2390
2391
2392 class GitUrlFetchHandler(BaseFetchHandler):
2393@@ -40,19 +34,24 @@
2394 def can_handle(self, source):
2395 url_parts = self.parse_url(source)
2396 # TODO (mattyw) no support for ssh git@ yet
2397- if url_parts.scheme not in ('http', 'https', 'git'):
2398+ if url_parts.scheme not in ('http', 'https', 'git', ''):
2399 return False
2400+ elif not url_parts.scheme:
2401+ return os.path.exists(os.path.join(source, '.git'))
2402 else:
2403 return True
2404
2405- def clone(self, source, dest, branch, depth=None):
2406+ def clone(self, source, dest, branch="master", depth=None):
2407 if not self.can_handle(source):
2408 raise UnhandledSource("Cannot handle {}".format(source))
2409
2410- if depth:
2411- Repo.clone_from(source, dest, branch=branch, depth=depth)
2412+ if os.path.exists(dest):
2413+ cmd = ['git', '-C', dest, 'pull', source, branch]
2414 else:
2415- Repo.clone_from(source, dest, branch=branch)
2416+ cmd = ['git', 'clone', source, dest, '--branch', branch]
2417+ if depth:
2418+ cmd.extend(['--depth', str(depth)])
2419+ check_call(cmd)
2420
2421 def install(self, source, branch="master", dest=None, depth=None):
2422 url_parts = self.parse_url(source)
2423@@ -62,11 +61,9 @@
2424 else:
2425 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2426 branch_name)
2427- if not os.path.exists(dest_dir):
2428- mkdir(dest_dir, perms=0o755)
2429 try:
2430 self.clone(source, dest_dir, branch, depth)
2431- except GitCommandError as e:
2432+ except CalledProcessError as e:
2433 raise UnhandledSource(e)
2434 except OSError as e:
2435 raise UnhandledSource(e.strerror)
2436
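`GitUrlFetchHandler.clone()` now follows the same pattern for git: pull into an existing clone, otherwise clone with `--branch` and an optional `--depth`. A sketch of the command selection (nothing executed; `depth` is stringified since subprocess argv elements must be strings):

```python
def git_fetch_cmd(source, dest, branch='master', depth=None, dest_exists=False):
    if dest_exists:
        # Update an existing clone in place
        return ['git', '-C', dest, 'pull', source, branch]
    cmd = ['git', 'clone', source, dest, '--branch', branch]
    if depth:
        cmd.extend(['--depth', str(depth)])
    return cmd

print(git_fetch_cmd('https://example.com/r.git', '/tmp/r', depth=1))
```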
2437=== modified file 'hooks/odl_controller_hooks.py'
2438--- hooks/odl_controller_hooks.py 2015-11-04 19:14:40 +0000
2439+++ hooks/odl_controller_hooks.py 2016-02-22 12:50:59 +0000
2440@@ -18,48 +18,57 @@
2441 adduser,
2442 mkdir,
2443 restart_on_change,
2444- service_start
2445+ service_start,
2446+ service,
2447+ init_is_systemd,
2448 )
2449
2450 from charmhelpers.fetch import (
2451 configure_sources, apt_install, install_remote)
2452
2453-from odl_controller_utils import write_mvn_config, process_odl_cmds
2454-from odl_controller_utils import PROFILES
2455+from odl_controller_utils import (
2456+ write_mvn_config,
2457+ process_odl_cmds,
2458+ PROFILES,
2459+ configure_admin_user,
2460+ admin_user_password,
2461+ assess_status
2462+)
2463
2464 PACKAGES = ["default-jre-headless", "python-jinja2"]
2465 KARAF_PACKAGE = "opendaylight-karaf"
2466
2467 hooks = Hooks()
2468-config = config()
2469
2470
2471 @hooks.hook("config-changed")
2472 @restart_on_change({"/home/opendaylight/.m2/settings.xml": ["odl-controller"]})
2473 def config_changed():
2474- process_odl_cmds(PROFILES[config["profile"]])
2475+ process_odl_cmds(PROFILES[config("profile")])
2476+ write_mvn_config()
2477+ configure_admin_user()
2478 for r_id in relation_ids("controller-api"):
2479 controller_api_joined(r_id)
2480- write_mvn_config()
2481
2482
2483 @hooks.hook("controller-api-relation-joined")
2484 def controller_api_joined(r_id=None):
2485 relation_set(relation_id=r_id,
2486- port=PROFILES[config["profile"]]["port"],
2487- username="admin", password="admin")
2488+ port=PROFILES[config("profile")]["port"],
2489+ username="admin",
2490+ password=admin_user_password())
2491
2492
2493 @hooks.hook()
2494 def install():
2495- if config.get("install-sources"):
2496+ if config("install-sources"):
2497 configure_sources(update=True, sources_var="install-sources",
2498 keys_var="install-keys")
2499
2500 # install packages
2501 apt_install(PACKAGES, fatal=True)
2502
2503- install_url = config["install-url"]
2504+ install_url = config("install-url")
2505 if install_url:
2506 # install opendaylight from tarball
2507
2508@@ -76,7 +85,12 @@
2509 apt_install([KARAF_PACKAGE], fatal=True)
2510 install_dir_name = "opendaylight-karaf"
2511
2512- shutil.copy("files/odl-controller.conf", "/etc/init")
2513+ if init_is_systemd():
2514+ shutil.copy("files/odl-controller.service", "/lib/systemd/system")
2515+ service('enable', 'odl-controller')
2516+ else:
2517+ shutil.copy("files/odl-controller.conf", "/etc/init")
2518+
2519 adduser("opendaylight", system_user=True)
2520 mkdir("/home/opendaylight", owner="opendaylight", group="opendaylight",
2521 perms=0755)
2522@@ -96,6 +110,7 @@
2523 hooks.execute(sys.argv)
2524 except UnregisteredHookError as e:
2525 log("Unknown hook {} - skipping.".format(e))
2526+ assess_status()
2527
2528
2529 @hooks.hook("ovsdb-manager-relation-joined")
2530
2531=== modified file 'hooks/odl_controller_utils.py'
2532--- hooks/odl_controller_utils.py 2016-02-19 14:23:05 +0000
2533+++ hooks/odl_controller_utils.py 2016-02-22 12:50:59 +0000
2534@@ -3,9 +3,12 @@
2535 import urlparse
2536
2537 from charmhelpers.core.templating import render
2538-from charmhelpers.core.hookenv import config
2539+from charmhelpers.core.hookenv import config, status_set
2540 from charmhelpers.core.decorators import retry_on_exception
2541+from charmhelpers.core.host import pwgen, service_running
2542+from charmhelpers.core.unitdata import kv
2543
2544+from odl_helper import ODLAccountHelper, ODLInteractionError
2545
2546 PROFILES = {
2547 "cisco-vpp": {
2548@@ -18,7 +21,8 @@
2549 "port": 8181
2550 },
2551 "openvswitch-odl": {
2552- "feature:install": ["odl-base-all", "odl-aaa-authn",
2553+ "feature:install": ["odl-base-all",
2554+ "odl-aaa-api", "odl-aaa-authn",
2555 "odl-restconf", "odl-nsf-all",
2556 "odl-adsal-northbound",
2557 "odl-mdsal-apidocs",
2558@@ -28,22 +32,29 @@
2559 "port": 8080
2560 },
2561 "openvswitch-odl-lithium": {
2562- "feature:install": ["odl-ovsdb-openstack"],
2563+ "feature:install": ["odl-ovsdb-openstack",
2564+ "odl-restconf",
2565+ "odl-aaa-api",
2566+ "odl-aaa-authn",
2567+ "odl-dlux-core"],
2568 "port": 8080
2569 },
2570 "openvswitch-odl-beryllium": {
2571 "feature:install": ["odl-ovsdb-openstack",
2572 "odl-restconf",
2573+ "odl-aaa-api",
2574 "odl-aaa-authn",
2575 "odl-dlux-all"],
2576 "port": 8080
2577 },
2578 "openvswitch-odl-beryllium-l3": {
2579- "feature:install": ["odl-ovsdb-openstack"],
2580+ "feature:install": ["odl-ovsdb-openstack"
2581+ "odl-aaa-api"],
2582 "port": 8080
2583 },
2584 "openvswitch-odl-beryllium-sfc": {
2585 "feature:install": ["odl-ovsdb-openstack",
2586+ "odl-aaa-api",
2587 "odl-sfc-core",
2588 "odl-sfc-sb-rest",
2589 "odl-sfc-ui",
2590@@ -55,6 +66,7 @@
2591 },
2592 "openvswitch-odl-beryllium-vpn": {
2593 "feature:install": ["odl-ovsdb-openstack",
2594+ "odl-aaa-api",
2595 "odl-vpnservice-api",
2596 "odl-vpnservice-impl",
2597 "odl-vpnservice-impl-rest",
2598@@ -140,9 +152,50 @@
2599 def process_odl_cmds(odl_cmds):
2600 features = filter_installed(odl_cmds.get("feature:install", []))
2601 if features:
2602+ status_set('maintenance', 'Installing requested features into karaf')
2603 run_odl(["feature:install"] + features)
2604 logging = odl_cmds.get("log:set")
2605 if logging:
2606 for log_level in logging.keys():
2607 for target in logging[log_level]:
2608 run_odl(["log:set", log_level, target])
2609+
2610+
2611+DEFAULT_PASSWORD = 'admin'
2612+
2613+
2614+@retry_on_exception(5, base_delay=20, exc_type=ODLInteractionError)
2615+def configure_admin_user():
2616+ '''Configure password for admin user based on provided configuration'''
2617+ db = kv()
2618+ configured_admin_password = config('admin-password')
2619+ current_admin_password = db.get('admin-password',
2620+ default=DEFAULT_PASSWORD)
2621+
2622+ new_password = None
2623+ if (not current_admin_password.startswith('autogen(') and
2624+ not configured_admin_password):
2625+ # Auto generated password - can be overridden with a config-changed
2626+ new_password = "autogen({})".format(pwgen(length=32))
2627+ elif current_admin_password != configured_admin_password:
2628+ # Use configured admin password (new or changed)
2629+ new_password = configured_admin_password
2630+
2631+ if new_password:
2632+ status_set('maintenance', 'Updating password for admin user')
2633+ helper = ODLAccountHelper(password=current_admin_password)
2634+ helper.update_user(user='admin', password=new_password)
2635+ db.set('admin-password', new_password)
2636+ db.flush()
2637+
2638+
2639+def admin_user_password():
2640+ return kv().get('admin-password', default=DEFAULT_PASSWORD)
2641+
2642+
2643+def assess_status():
2644+ '''Determine status of current unit'''
2645+ if service_running('odl-controller'):
2646+ status_set('active', 'Unit is ready')
2647+ else:
2648+ status_set('blocked', 'OpenDaylight is not running')
2649
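The password handling added to `configure_admin_user` above follows a small decision rule: auto-generate (and mark with an `autogen(...)` wrapper) when no password is configured and the current one was set manually, otherwise adopt a changed configured password, otherwise do nothing. A sketch of that rule as a pure function, for review purposes only (`choose_new_password` is a hypothetical name, not part of the charm):

```python
def choose_new_password(current, configured, generate):
    # Mirrors the decision logic in configure_admin_user (illustrative only).
    # current:    password currently stored in the unit kv store
    # configured: value of the 'admin-password' config option (may be None)
    # generate:   callable returning a fresh random password
    new_password = None
    if not current.startswith('autogen(') and not configured:
        # Manually-set or default password and no config override: auto-generate
        new_password = "autogen({})".format(generate())
    elif current != configured:
        # Configured password is new or has changed
        new_password = configured
    return new_password
```

Note that when the stored password is already `autogen(...)` and the config option is unset, the function returns `None` and no update is pushed to ODL, matching the charm's behaviour on repeated config-changed hooks.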
2650=== added file 'hooks/odl_helper.py'
2651--- hooks/odl_helper.py 1970-01-01 00:00:00 +0000
2652+++ hooks/odl_helper.py 2016-02-22 12:50:59 +0000
2653@@ -0,0 +1,96 @@
2654+import requests
2655+import json
2656+
2657+
2658+class ODLInteractionError(Exception):
2659+ '''Generic exception for failures when interacting with ODL'''
2660+ pass
2661+
2662+
2663+class ODLAccountHelper(object):
2664+ '''Helper class for interacting with the ODL AAA REST API'''
2665+
2666+ def __init__(self, host='localhost', port=8181,
2667+ user='admin', password='admin',
2668+ verify=False):
2669+ super(ODLAccountHelper, self).__init__()
2670+ self.host = host
2671+ self.port = port
2672+ self.user = user
2673+ self.password = password
2674+ self.session = requests.Session()
2675+ self.auth = requests.auth.HTTPBasicAuth(username=self.user,
2676+ password=self.password)
2677+ self.odl_base_url = (
2678+ "http://{}:{}/auth/v1".format(host, port)
2679+ )
2680+ self.session.verify = verify
2681+ self.session.headers.update({"content-type": "application/json"})
2682+
2683+ def get_users(self):
2684+ '''Retrieve a list of all users'''
2685+ response = self.session.get(url="{}/users".format(self.odl_base_url),
2686+ auth=self.auth)
2687+ if response.status_code not in [requests.codes.ok,
2688+ requests.codes.no_content]:
2689+ raise ODLInteractionError('Unable to retrieve users: {}'
2690+ .format(response.reason))
2691+ return json.loads(response.content)['users']
2692+
2693+ def get_user(self, user):
2694+ '''Retrieve details of a user'''
2695+ _users = self.get_users()
2696+ for _user in _users:
2697+ if _user['name'] == user:
2698+ return _user
2699+ raise ValueError('User {} not found'.format(user))
2700+
2701+ def user_exists(self, user):
2702+ '''Determine if a user exists'''
2703+ try:
2704+ return self.get_user(user) is not None
2705+ except ValueError:
2706+ return False
2707+
2708+ def update_user(self, user, password, description=None,
2709+ email=None, enabled=True):
2710+ '''Update an existing user'''
2711+ user = self.get_user(user)
2712+ user['password'] = password
2713+ if description:
2714+ user['description'] = description
2715+ if email:
2716+ user['email'] = email
2717+ user['enabled'] = enabled
2718+ response = self.session.put(url="{}/users/{}".format(self.odl_base_url,
2719+ user['userid']),
2720+ data=json.dumps(user),
2721+ auth=self.auth)
2722+
2723+ if response.status_code not in [requests.codes.ok,
2724+ requests.codes.no_content]:
2725+ raise ODLInteractionError('Unable to update user: {}'
2726+ .format(response.reason))
2727+
2728+ if user['name'] == self.user:
2729+ self.password = password
2730+ return response.content
2731+
2732+ def create_user(self, user, password, description=None,
2733+ email=None, enabled=True):
2734+ if self.user_exists(user):
2735+ raise ValueError('User {} already exists'.format(user))
2736+ user = {
2737+ 'name': user,
2738+ 'password': password,
2739+ 'description': description or '',
2740+ 'email': email or '',
2741+ 'enabled': enabled,
2742+ }
2743+ response = self.session.post(url="{}/users".format(self.odl_base_url),
2744+ auth=self.auth,
2745+ data=json.dumps(user))
2746+ if response.status_code not in [requests.codes.created]:
2747+ raise ODLInteractionError('Unable to create user: {}'
2748+ .format(response.reason))
2749+ return response.content
2750
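For reviewers, the JSON body that `ODLAccountHelper.create_user` POSTs to the AAA `/auth/v1/users` endpoint can be sketched as below (`build_user_payload` is a hypothetical helper shown for illustration; the charm builds the dict inline):

```python
import json


def build_user_payload(name, password, description=None, email=None,
                       enabled=True):
    # Serialise a user record in the shape ODLAccountHelper.create_user
    # sends to the ODL AAA REST API (illustrative only).
    return json.dumps({
        'name': name,
        'password': password,
        'description': description or '',
        'email': email or '',
        'enabled': enabled,
    })
```

`update_user` sends the same shape via PUT to `/auth/v1/users/<userid>`, reusing the record returned by `get_user` so the `userid` is preserved.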
2751=== added symlink 'hooks/update-status'
2752=== target is u'odl_controller_hooks.py'
2753=== modified file 'tests/015-basic-trusty-icehouse'
2754--- tests/015-basic-trusty-icehouse 2015-11-11 14:57:38 +0000
2755+++ tests/015-basic-trusty-icehouse 2016-02-22 12:50:59 +0000
2756@@ -1,4 +1,4 @@
2757-#!/usr/bin/python
2758+#!/usr/bin/env python
2759
2760 """Amulet tests on a basic odl controller deployment on trusty-icehouse."""
2761
2762
2763=== modified file 'tests/016-basic-trusty-juno'
2764--- tests/016-basic-trusty-juno 2015-11-11 14:57:38 +0000
2765+++ tests/016-basic-trusty-juno 2016-02-22 12:50:59 +0000
2766@@ -1,4 +1,4 @@
2767-#!/usr/bin/python
2768+#!/usr/bin/env python
2769
2770 """Amulet tests on a basic odl controller deployment on trusty-juno."""
2771
2772
2773=== modified file 'tests/017-basic-trusty-kilo'
2774--- tests/017-basic-trusty-kilo 2015-11-11 14:57:38 +0000
2775+++ tests/017-basic-trusty-kilo 2016-02-22 12:50:59 +0000
2776@@ -1,4 +1,4 @@
2777-#!/usr/bin/python
2778+#!/usr/bin/env python
2779
2780 """Amulet tests on a basic odl controller deployment on trusty-kilo."""
2781
2782
2783=== modified file 'tests/018-basic-trusty-liberty'
2784--- tests/018-basic-trusty-liberty 2015-11-11 19:54:58 +0000
2785+++ tests/018-basic-trusty-liberty 2016-02-22 12:50:59 +0000
2786@@ -1,4 +1,4 @@
2787-#!/usr/bin/python
2788+#!/usr/bin/env python
2789
2790 """Amulet tests on a basic odl controller deployment on trusty-liberty."""
2791
2792
2793=== modified file 'tests/019-basic-trusty-mitaka' (properties changed: -x to +x)
2794--- tests/019-basic-trusty-mitaka 2016-01-19 12:47:39 +0000
2795+++ tests/019-basic-trusty-mitaka 2016-02-22 12:50:59 +0000
2796@@ -1,4 +1,4 @@
2797-#!/usr/bin/python
2798+#!/usr/bin/env python
2799
2800 """Amulet tests on a basic odl controller deployment on trusty-mitaka."""
2801
2802
2803=== modified file 'tests/020-basic-wily-liberty' (properties changed: -x to +x)
2804--- tests/020-basic-wily-liberty 2016-01-08 21:45:05 +0000
2805+++ tests/020-basic-wily-liberty 2016-02-22 12:50:59 +0000
2806@@ -1,4 +1,4 @@
2807-#!/usr/bin/python
2808+#!/usr/bin/env python
2809
2810 """Amulet tests on a basic odl controller deployment on wily-liberty."""
2811
2812
2813=== modified file 'tests/021-basic-xenial-mitaka'
2814--- tests/021-basic-xenial-mitaka 2016-01-08 21:45:05 +0000
2815+++ tests/021-basic-xenial-mitaka 2016-02-22 12:50:59 +0000
2816@@ -1,4 +1,4 @@
2817-#!/usr/bin/python
2818+#!/usr/bin/env python
2819
2820 """Amulet tests on a basic odl controller deployment on xenial-mitaka."""
2821
2822
2823=== modified file 'tests/basic_deployment.py'
2824--- tests/basic_deployment.py 2016-02-19 18:08:58 +0000
2825+++ tests/basic_deployment.py 2016-02-22 12:50:59 +0000
2826@@ -1,4 +1,4 @@
2827-#!/usr/bin/python
2828+#!/usr/bin/env python
2829
2830 import amulet
2831 import os
2832@@ -102,7 +102,7 @@
2833 neutron_api_config = {'neutron-security-groups': 'False',
2834 'manage-neutron-plugin-legacy-mode': 'False'}
2835 neutron_api_odl_config = {'overlay-network-type': 'vxlan gre'}
2836- odl_controller_config = {}
2837+ odl_controller_config = {'admin-password': 'testpassword'}
2838 if os.environ.get('AMULET_ODL_LOCATION'):
2839 odl_controller_config['install-url'] = \
2840 os.environ['AMULET_ODL_LOCATION']
2841
2842=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
2843--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 19:54:58 +0000
2844+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-02-22 12:50:59 +0000
2845@@ -121,11 +121,12 @@
2846
2847 # Charms which should use the source config option
2848 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
2849- 'ceph-osd', 'ceph-radosgw']
2850+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
2851
2852 # Charms which can not use openstack-origin, ie. many subordinates
2853 no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
2854- 'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
2855+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
2856+ 'cinder-backup']
2857
2858 if self.openstack:
2859 for svc in services:
2860@@ -225,7 +226,8 @@
2861 self.precise_havana, self.precise_icehouse,
2862 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
2863 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
2864- self.wily_liberty) = range(12)
2865+ self.wily_liberty, self.trusty_mitaka,
2866+ self.xenial_mitaka) = range(14)
2867
2868 releases = {
2869 ('precise', None): self.precise_essex,
2870@@ -237,9 +239,11 @@
2871 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
2872 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
2873 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
2874+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
2875 ('utopic', None): self.utopic_juno,
2876 ('vivid', None): self.vivid_kilo,
2877- ('wily', None): self.wily_liberty}
2878+ ('wily', None): self.wily_liberty,
2879+ ('xenial', None): self.xenial_mitaka}
2880 return releases[(self.series, self.openstack)]
2881
2882 def _get_openstack_release_string(self):
2883@@ -256,6 +260,7 @@
2884 ('utopic', 'juno'),
2885 ('vivid', 'kilo'),
2886 ('wily', 'liberty'),
2887+ ('xenial', 'mitaka'),
2888 ])
2889 if self.openstack:
2890 os_origin = self.openstack.split(':')[1]
2891
2892=== modified file 'unit_tests/test_odl_controller_hooks.py'
2893--- unit_tests/test_odl_controller_hooks.py 2015-11-11 14:57:38 +0000
2894+++ unit_tests/test_odl_controller_hooks.py 2016-02-22 12:50:59 +0000
2895@@ -21,6 +21,10 @@
2896 'service_start',
2897 'shutil',
2898 'write_mvn_config',
2899+ 'configure_admin_user',
2900+ 'admin_user_password',
2901+ 'init_is_systemd',
2902+ 'service',
2903 ]
2904
2905
2906@@ -29,11 +33,11 @@
2907 def setUp(self):
2908 super(ODLControllerHooksTests, self).setUp(hooks, TO_PATCH)
2909
2910- self.config.__getitem__.side_effect = self.test_config.get
2911- self.config.get.side_effect = self.test_config.get
2912+ self.config.side_effect = self.test_config.get
2913 self.install_url = 'http://10.10.10.10/distribution-karaf.tgz'
2914 self.test_config.set('install-url', self.install_url)
2915 self.test_config.set('profile', 'default')
2916+ self.init_is_systemd.return_value = False
2917
2918 def _call_hook(self, hookname):
2919 hooks.hooks.execute([
2920@@ -68,11 +72,44 @@
2921 self.shutil.copy.assert_called_with('files/odl-controller.conf',
2922 '/etc/init')
2923
2924+ @patch('os.symlink')
2925+ @patch('os.path.exists')
2926+ @patch('os.listdir')
2927+ def test_install_hook_systemd(self, mock_listdir,
2928+ mock_path_exists, mock_symlink):
2929+ self.init_is_systemd.return_value = True
2930+ mock_listdir.return_value = ['random-file', 'distribution-karaf.tgz']
2931+ mock_path_exists.return_value = False
2932+ self._call_hook('install')
2933+ self.apt_install.assert_called_with([
2934+ "default-jre-headless", "python-jinja2"],
2935+ fatal=True
2936+ )
2937+ mock_symlink.assert_called_with('distribution-karaf.tgz',
2938+ '/opt/opendaylight-karaf')
2939+ self.adduser.assert_called_with("opendaylight", system_user=True)
2940+ self.mkdir.assert_has_calls([
2941+ call('/home/opendaylight', owner="opendaylight",
2942+ group="opendaylight", perms=0755),
2943+ call('/var/log/opendaylight', owner="opendaylight",
2944+ group="opendaylight", perms=0755)
2945+ ])
2946+ self.check_call.assert_called_with([
2947+ "chown", "-R", "opendaylight:opendaylight",
2948+ "/opt/distribution-karaf.tgz"
2949+ ])
2950+ self.write_mvn_config.assert_called_with()
2951+ self.service_start.assert_called_with('odl-controller')
2952+ self.shutil.copy.assert_called_with('files/odl-controller.service',
2953+ '/lib/systemd/system')
2954+ self.service.assert_called_with('enable', 'odl-controller')
2955+
2956 def test_ovsdb_manager_joined_hook(self):
2957 self._call_hook('ovsdb-manager-relation-joined')
2958 self.relation_set.assert_called_with(port=6640, protocol="tcp")
2959
2960 def test_controller_api_relation_joined_hook(self):
2961+ self.admin_user_password.return_value = 'admin'
2962 self._call_hook('controller-api-relation-joined')
2963 self.relation_set.assert_called_with(relation_id=None, port=8080,
2964 username="admin",
2965@@ -93,3 +130,4 @@
2966 ],
2967 'port': 8080
2968 })
2969+ self.assertTrue(self.configure_admin_user.called)
2970
2971=== modified file 'unit_tests/test_odl_controller_utils.py'
2972--- unit_tests/test_odl_controller_utils.py 2015-11-17 16:20:15 +0000
2973+++ unit_tests/test_odl_controller_utils.py 2016-02-22 12:50:59 +0000
2974@@ -1,4 +1,4 @@
2975-from mock import patch, call
2976+from mock import patch, call, MagicMock
2977 from test_utils import CharmTestCase
2978
2979 import odl_controller_utils as utils
2980@@ -9,6 +9,9 @@
2981 'render',
2982 'config',
2983 'retry_on_exception',
2984+ 'kv',
2985+ 'ODLAccountHelper',
2986+ 'pwgen',
2987 ]
2988
2989
2990@@ -91,3 +94,94 @@
2991 call(["feature:install", "odl-l2switch-all"]),
2992 call(['log:set', 'TRACE', 'cosc-cvpn-ovs-rest'])
2993 ])
2994+
2995+ def test_admin_user_password(self):
2996+ db = MagicMock()
2997+ self.kv.return_value = db
2998+ db.get.return_value = 'testpassword'
2999+ self.assertEqual(utils.admin_user_password(), 'testpassword')
3000+ db.get.assert_called_with('admin-password',
3001+ default=utils.DEFAULT_PASSWORD)
3002+
3003+ def test_configure_admin_user_autogen(self):
3004+ self.test_config.set('admin-password', None)
3005+ self.pwgen.return_value = 'newgeneratedpassword'
3006+ helper = MagicMock()
3007+ self.ODLAccountHelper.return_value = helper
3008+ db = MagicMock()
3009+ self.kv.return_value = db
3010+ db.get.return_value = utils.DEFAULT_PASSWORD
3011+ utils.configure_admin_user()
3012+ db.get.assert_called_with('admin-password',
3013+ default=utils.DEFAULT_PASSWORD)
3014+ db.set.assert_called_with('admin-password',
3015+ 'autogen(newgeneratedpassword)')
3016+ helper.update_user.assert_called_with(
3017+ user='admin',
3018+ password='autogen(newgeneratedpassword)'
3019+ )
3020+ self.assertTrue(db.flush.called)
3021+
3022+ def test_configure_admin_user_unset(self):
3023+ self.test_config.set('admin-password', None)
3024+ self.pwgen.return_value = 'newgeneratedpassword'
3025+ helper = MagicMock()
3026+ self.ODLAccountHelper.return_value = helper
3027+ db = MagicMock()
3028+ self.kv.return_value = db
3029+ db.get.return_value = 'manualpassword'
3030+ utils.configure_admin_user()
3031+ db.get.assert_called_with('admin-password',
3032+ default=utils.DEFAULT_PASSWORD)
3033+ db.set.assert_called_with('admin-password',
3034+ 'autogen(newgeneratedpassword)')
3035+ helper.update_user.assert_called_with(
3036+ user='admin',
3037+ password='autogen(newgeneratedpassword)'
3038+ )
3039+ self.assertTrue(db.flush.called)
3040+
3041+ def test_configure_admin_user_nochange_autogen(self):
3042+ self.test_config.set('admin-password', None)
3043+ helper = MagicMock()
3044+ self.ODLAccountHelper.return_value = helper
3045+ db = MagicMock()
3046+ self.kv.return_value = db
3047+ db.get.return_value = 'autogen(generatedpassword)'
3048+ utils.configure_admin_user()
3049+ db.get.assert_called_with('admin-password',
3050+ default=utils.DEFAULT_PASSWORD)
3051+ self.assertFalse(db.set.called)
3052+ self.assertFalse(helper.update_user.called)
3053+ self.assertFalse(db.flush.called)
3054+
3055+ def test_configure_admin_user_nochange_manual(self):
3056+ self.test_config.set('admin-password', 'manualpassword')
3057+ helper = MagicMock()
3058+ self.ODLAccountHelper.return_value = helper
3059+ db = MagicMock()
3060+ self.kv.return_value = db
3061+ db.get.return_value = 'manualpassword'
3062+ utils.configure_admin_user()
3063+ db.get.assert_called_with('admin-password',
3064+ default=utils.DEFAULT_PASSWORD)
3065+ self.assertFalse(db.set.called)
3066+ self.assertFalse(helper.update_user.called)
3067+ self.assertFalse(db.flush.called)
3068+
3069+ def test_configure_admin_user_config(self):
3070+ self.test_config.set('admin-password', 'manualpassword')
3071+ helper = MagicMock()
3072+ self.ODLAccountHelper.return_value = helper
3073+ db = MagicMock()
3074+ self.kv.return_value = db
3075+ db.get.return_value = utils.DEFAULT_PASSWORD
3076+ utils.configure_admin_user()
3077+ self.config.assert_called_with('admin-password')
3078+ db.get.assert_called_with('admin-password',
3079+ default=utils.DEFAULT_PASSWORD)
3080+ db.set.assert_called_with('admin-password',
3081+ 'manualpassword')
3082+ helper.update_user.assert_called_with(user='admin',
3083+ password='manualpassword')
3084+ self.assertTrue(db.flush.called)
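The `retry_on_exception` decorator patched out in these tests wraps `configure_admin_user` so transient `ODLInteractionError`s (e.g. ODL still starting) are retried. A minimal sketch of such a retry decorator, assuming this simplified behaviour rather than the exact charmhelpers implementation:

```python
import functools
import time


def retry_on_exception(num_retries, base_delay=0, exc_type=Exception):
    # Simplified retry decorator: call f up to num_retries times, sleeping
    # base_delay seconds between attempts; re-raise on the final failure.
    def wrap(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            for attempt in range(num_retries):
                try:
                    return f(*args, **kwargs)
                except exc_type:
                    if attempt == num_retries - 1:
                        raise
                    time.sleep(base_delay)
        return wrapped
    return wrap
```

With `@retry_on_exception(5, base_delay=20, exc_type=ODLInteractionError)` as used above, a unit waits up to roughly 100 seconds for ODL's REST API to become reachable before the hook fails.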
