Merge lp:~james-page/charms/trusty/odl-controller/productionize into lp:~openstack-charmers-archive/charms/trusty/odl-controller/next

Proposed by James Page
Status: Needs review
Proposed branch: lp:~james-page/charms/trusty/odl-controller/productionize
Merge into: lp:~openstack-charmers-archive/charms/trusty/odl-controller/next
Diff against target: 3084 lines (+1448/-346)
37 files modified
.project (+17/-0)
.pydevproject (+10/-0)
config.yaml (+4/-0)
files/odl-controller.service (+12/-0)
hooks/charmhelpers/contrib/network/ip.py (+36/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+10/-4)
hooks/charmhelpers/contrib/openstack/context.py (+48/-10)
hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh (+7/-5)
hooks/charmhelpers/contrib/openstack/neutron.py (+18/-8)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+19/-11)
hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken (+11/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+218/-69)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+441/-59)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/hookenv.py (+41/-7)
hooks/charmhelpers/core/host.py (+103/-43)
hooks/charmhelpers/core/services/helpers.py (+11/-5)
hooks/charmhelpers/core/templating.py (+13/-7)
hooks/charmhelpers/fetch/__init__.py (+9/-1)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
hooks/odl_controller_hooks.py (+26/-11)
hooks/odl_controller_utils.py (+57/-4)
hooks/odl_helper.py (+96/-0)
tests/015-basic-trusty-icehouse (+1/-1)
tests/016-basic-trusty-juno (+1/-1)
tests/017-basic-trusty-kilo (+1/-1)
tests/018-basic-trusty-liberty (+1/-1)
tests/019-basic-trusty-mitaka (+1/-1)
tests/020-basic-wily-liberty (+1/-1)
tests/021-basic-xenial-mitaka (+1/-1)
tests/basic_deployment.py (+2/-2)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+9/-4)
unit_tests/test_odl_controller_hooks.py (+40/-2)
unit_tests/test_odl_controller_utils.py (+95/-1)
To merge this branch: bzr merge lp:~james-page/charms/trusty/odl-controller/productionize
Reviewer: OpenStack Charmers (status: Pending)
Review via email: mp+286718@code.launchpad.net
46. By James Page

Resync helpers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #838 odl-controller-next for james-page mp286718
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/838/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #939 odl-controller-next for james-page mp286718
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/15136684/
Build: http://10.245.162.36:8080/job/charm_lint_check/939/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #384 odl-controller-next for james-page mp286718
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15137306/
Build: http://10.245.162.36:8080/job/charm_amulet_test/384/

47. By James Page

Tidy pep8

48. By James Page

Tidy unit tests

49. By James Page

Update profiles to include aaa-api for password management

50. By James Page

Make venv compat

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #1166 odl-controller-next for james-page mp286718
    LINT OK: passed

Build: http://10.245.162.36:8080/job/charm_lint_check/1166/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #1001 odl-controller-next for james-page mp286718
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.

Full unit test output: http://paste.ubuntu.com/15170675/
Build: http://10.245.162.36:8080/job/charm_unit_test/1001/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #445 odl-controller-next for james-page mp286718
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15171058/
Build: http://10.245.162.36:8080/job/charm_amulet_test/445/

Unmerged revisions

50. By James Page

Make venv compat

49. By James Page

Update profiles to include aaa-api for password management

48. By James Page

Tidy unit tests

47. By James Page

Tidy pep8

46. By James Page

Resync helpers

45. By James Page

Merge opnfv stuff

44. By James Page

Rebase

43. By James Page

Merge amulet test fixes for neutron

42. By James Page

Resync helpers

41. By James Page

Rebase

Preview Diff

=== added file '.project'
--- .project	1970-01-01 00:00:00 +0000
+++ .project	2016-02-22 12:50:59 +0000
@@ -0,0 +1,17 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<projectDescription>
+    <name>odl-controller</name>
+    <comment></comment>
+    <projects>
+    </projects>
+    <buildSpec>
+        <buildCommand>
+            <name>org.python.pydev.PyDevBuilder</name>
+            <arguments>
+            </arguments>
+        </buildCommand>
+    </buildSpec>
+    <natures>
+        <nature>org.python.pydev.pythonNature</nature>
+    </natures>
+</projectDescription>
=== added file '.pydevproject'
--- .pydevproject	1970-01-01 00:00:00 +0000
+++ .pydevproject	2016-02-22 12:50:59 +0000
@@ -0,0 +1,10 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<?eclipse-pydev version="1.0"?><pydev_project>
+<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property>
+<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
+<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
+<path>/odl-controller/unit_tests</path>
+<path>/odl-controller/hooks</path>
+<path>/odl-controller/tests</path>
+</pydev_pathproperty>
+</pydev_project>
=== modified file 'config.yaml'
--- config.yaml	2016-02-19 16:37:59 +0000
+++ config.yaml	2016-02-22 12:50:59 +0000
@@ -37,3 +37,7 @@
     type: string
     default: ''
     description: Proxy to use for https connections for OpenDayLight
+  admin-password:
+    type: string
+    default:
+    description: Password for admin user; randomly generated if not provided.
=== added file 'files/odl-controller.service'
--- files/odl-controller.service	1970-01-01 00:00:00 +0000
+++ files/odl-controller.service	2016-02-22 12:50:59 +0000
@@ -0,0 +1,12 @@
+[Unit]
+Description=OpenDayLight SDN Controller
+After=network.target
+
+[Service]
+Type=forking
+User=opendaylight
+Group=opendaylight
+ExecStart=/opt/opendaylight-karaf/bin/start
+
+[Install]
+WantedBy=multi-user.target
=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py	2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/network/ip.py	2016-02-22 12:50:59 +0000
@@ -53,7 +53,7 @@
 
 
 def no_ip_found_error_out(network):
-    errmsg = ("No IP address found in network: %s" % network)
+    errmsg = ("No IP address found in network(s): %s" % network)
     raise ValueError(errmsg)
 
 
@@ -61,7 +61,7 @@
     """Get an IPv4 or IPv6 address within the network from the host.
 
     :param network (str): CIDR presentation format. For example,
-        '192.168.1.0/24'.
+        '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
     :param fallback (str): If no address is found, return fallback.
     :param fatal (boolean): If no address is found, fallback is not
         set and fatal is True then exit(1).
@@ -75,24 +75,26 @@
     else:
         return None
 
-    _validate_cidr(network)
-    network = netaddr.IPNetwork(network)
-    for iface in netifaces.interfaces():
-        addresses = netifaces.ifaddresses(iface)
-        if network.version == 4 and netifaces.AF_INET in addresses:
-            addr = addresses[netifaces.AF_INET][0]['addr']
-            netmask = addresses[netifaces.AF_INET][0]['netmask']
-            cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
-            if cidr in network:
-                return str(cidr.ip)
+    networks = network.split() or [network]
+    for network in networks:
+        _validate_cidr(network)
+        network = netaddr.IPNetwork(network)
+        for iface in netifaces.interfaces():
+            addresses = netifaces.ifaddresses(iface)
+            if network.version == 4 and netifaces.AF_INET in addresses:
+                addr = addresses[netifaces.AF_INET][0]['addr']
+                netmask = addresses[netifaces.AF_INET][0]['netmask']
+                cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
+                if cidr in network:
+                    return str(cidr.ip)
 
-        if network.version == 6 and netifaces.AF_INET6 in addresses:
-            for addr in addresses[netifaces.AF_INET6]:
-                if not addr['addr'].startswith('fe80'):
-                    cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
-                                                        addr['netmask']))
-                    if cidr in network:
-                        return str(cidr.ip)
+            if network.version == 6 and netifaces.AF_INET6 in addresses:
+                for addr in addresses[netifaces.AF_INET6]:
+                    if not addr['addr'].startswith('fe80'):
+                        cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
+                                                            addr['netmask']))
+                        if cidr in network:
+                            return str(cidr.ip)
 
     if fallback is not None:
         return fallback
@@ -454,3 +456,18 @@
         return result
     else:
         return result.split('.')[0]
+
+
+def port_has_listener(address, port):
+    """
+    Returns True if the address:port is open and being listened to,
+    else False.
+
+    @param address: an IP address or hostname
+    @param port: integer port
+
+    Note calls 'nc' via a subprocess shell
+    """
+    cmd = ['nc', '-z', address, str(port)]
+    result = subprocess.call(cmd)
+    return not(bool(result))
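The restructured `get_address_in_network` loop above can be sketched with the stdlib `ipaddress` module (charmhelpers itself uses `netaddr`/`netifaces`; the function name and inputs here are illustrative only):

```python
import ipaddress

def address_in_networks(networks, candidate_addrs):
    # Mirror the new space-delimited multi-network behaviour: try each CIDR
    # in order and return the first candidate address it contains.
    for net in networks.split():
        net = ipaddress.ip_network(net)
        for addr in candidate_addrs:
            if ipaddress.ip_address(addr) in net:
                return addr
    return None
```

As in the diff, the first network listed wins even if a later network also matches an address.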
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py	2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py	2016-02-22 12:50:59 +0000
@@ -121,10 +121,12 @@
 
         # Charms which should use the source config option
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
-                      'ceph-osd', 'ceph-radosgw']
+                      'ceph-osd', 'ceph-radosgw', 'ceph-mon']
 
         # Charms which can not use openstack-origin, ie. many subordinates
-        no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
+        no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
+                     'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
+                     'cinder-backup']
 
         if self.openstack:
             for svc in services:
@@ -224,7 +226,8 @@
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
          self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
-         self.wily_liberty) = range(12)
+         self.wily_liberty, self.trusty_mitaka,
+         self.xenial_mitaka) = range(14)
 
         releases = {
             ('precise', None): self.precise_essex,
@@ -236,9 +239,11 @@
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
             ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
             ('utopic', None): self.utopic_juno,
             ('vivid', None): self.vivid_kilo,
-            ('wily', None): self.wily_liberty}
+            ('wily', None): self.wily_liberty,
+            ('xenial', None): self.xenial_mitaka}
         return releases[(self.series, self.openstack)]
 
     def _get_openstack_release_string(self):
@@ -255,6 +260,7 @@
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
             ('wily', 'liberty'),
+            ('xenial', 'mitaka'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
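The release table extended above is a plain (series, openstack-origin) dict lookup. A minimal stand-in showing the behaviour (entry names are illustrative; the real method maps to indices from the `range(14)` tuple):

```python
# A None origin means the distro archive for that series; a 'cloud:...'
# origin selects a cloud-archive pocket, as in the amulet helper's table.
RELEASES = {
    ('trusty', None): 'trusty_icehouse',
    ('trusty', 'cloud:trusty-liberty'): 'trusty_liberty',
    ('trusty', 'cloud:trusty-mitaka'): 'trusty_mitaka',
    ('wily', None): 'wily_liberty',
    ('xenial', None): 'xenial_mitaka',
}

def lookup_release(series, origin=None):
    # KeyError on an unknown combination, matching the helper's behaviour.
    return RELEASES[(series, origin)]
```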
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py	2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py	2016-02-22 12:50:59 +0000
@@ -57,6 +57,7 @@
     get_nic_hwaddr,
     mkdir,
     write_file,
+    pwgen,
 )
 from charmhelpers.contrib.hahelpers.cluster import (
     determine_apache_port,
@@ -87,6 +88,14 @@
     is_bridge_member,
 )
 from charmhelpers.contrib.openstack.utils import get_host_ip
+from charmhelpers.core.unitdata import kv
+
+try:
+    import psutil
+except ImportError:
+    apt_install('python-psutil', fatal=True)
+    import psutil
+
 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
 ADDRESS_TYPES = ['admin', 'internal', 'public']
 
@@ -401,6 +410,7 @@
             auth_host = format_ipv6_addr(auth_host) or auth_host
             svc_protocol = rdata.get('service_protocol') or 'http'
             auth_protocol = rdata.get('auth_protocol') or 'http'
+            api_version = rdata.get('api_version') or '2.0'
             ctxt.update({'service_port': rdata.get('service_port'),
                          'service_host': serv_host,
                          'auth_host': auth_host,
@@ -409,7 +419,8 @@
                          'admin_user': rdata.get('service_username'),
                          'admin_password': rdata.get('service_password'),
                          'service_protocol': svc_protocol,
-                         'auth_protocol': auth_protocol})
+                         'auth_protocol': auth_protocol,
+                         'api_version': api_version})
 
             if self.context_complete(ctxt):
                 # NOTE(jamespage) this is required for >= icehouse
@@ -626,15 +637,28 @@
         if config('haproxy-client-timeout'):
             ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
 
+        if config('haproxy-queue-timeout'):
+            ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
+
+        if config('haproxy-connect-timeout'):
+            ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
+
         if config('prefer-ipv6'):
             ctxt['ipv6'] = True
             ctxt['local_host'] = 'ip6-localhost'
             ctxt['haproxy_host'] = '::'
-            ctxt['stat_port'] = ':::8888'
         else:
             ctxt['local_host'] = '127.0.0.1'
             ctxt['haproxy_host'] = '0.0.0.0'
-            ctxt['stat_port'] = ':8888'
+
+        ctxt['stat_port'] = '8888'
+
+        db = kv()
+        ctxt['stat_password'] = db.get('stat-password')
+        if not ctxt['stat_password']:
+            ctxt['stat_password'] = db.set('stat-password',
+                                           pwgen(32))
+            db.flush()
 
         for frontend in cluster_hosts:
             if (len(cluster_hosts[frontend]['backends']) > 1 or
@@ -1088,6 +1112,20 @@
                       config_flags_parser(config_flags)}
 
 
+class LibvirtConfigFlagsContext(OSContextGenerator):
+    """
+    This context provides support for extending
+    the libvirt section through user-defined flags.
+    """
+    def __call__(self):
+        ctxt = {}
+        libvirt_flags = config('libvirt-flags')
+        if libvirt_flags:
+            ctxt['libvirt_flags'] = config_flags_parser(
+                libvirt_flags)
+        return ctxt
+
+
 class SubordinateConfigContext(OSContextGenerator):
 
     """
@@ -1228,13 +1266,11 @@
 
     @property
     def num_cpus(self):
-        try:
-            from psutil import NUM_CPUS
-        except ImportError:
-            apt_install('python-psutil', fatal=True)
-            from psutil import NUM_CPUS
-
-        return NUM_CPUS
+        # NOTE: use cpu_count if present (16.04 support)
+        if hasattr(psutil, 'cpu_count'):
+            return psutil.cpu_count()
+        else:
+            return psutil.NUM_CPUS
 
     def __call__(self):
         multiplier = config('worker-multiplier') or 0
@@ -1437,6 +1473,8 @@
                     rdata.get('service_protocol') or 'http',
                     'auth_protocol':
                     rdata.get('auth_protocol') or 'http',
+                    'api_version':
+                    rdata.get('api_version') or '2.0',
                 }
                 if self.context_complete(ctxt):
                     return ctxt
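The `stat_password` handling above generates the haproxy stats password once and caches it in the unit's kv store, so every subsequent render reuses the same value instead of the old hard-coded `admin:password`. A minimal sketch of that generate-once pattern, with a plain dict standing in for charmhelpers' `kv()` store and `secrets` standing in for `pwgen(32)`:

```python
import secrets

def get_stat_password(store):
    # Reuse a previously generated password if one is cached...
    password = store.get('stat-password')
    if not password:
        # ...otherwise generate once and persist (kv().set + db.flush()
        # in the real context).
        password = secrets.token_hex(16)  # 32 hex chars, like pwgen(32)
        store['stat-password'] = password
    return password
```

The point of the pattern is idempotency across hook invocations: config-changed can rerun the context any number of times without rotating the stats credentials.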
=== modified file 'hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh'
--- hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh	2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh	2016-02-22 12:50:59 +0000
@@ -9,15 +9,17 @@
 CRITICAL=0
 NOTACTIVE=''
 LOGFILE=/var/log/nagios/check_haproxy.log
-AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
+AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR=1{print $4}')
 
-for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
+typeset -i N_INSTANCES=0
+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
 do
-    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
+    N_INSTANCES=N_INSTANCES+1
+    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
     if [ $? != 0 ]; then
         date >> $LOGFILE
         echo $output >> $LOGFILE
-        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&1
+        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
         CRITICAL=1
         NOTACTIVE="${NOTACTIVE} $appserver"
     fi
@@ -28,5 +30,5 @@
     exit 2
 fi
 
-echo "OK: All haproxy instances looking good"
+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
 exit 0
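The rewritten check matches rows on haproxy's machine-readable `/;csv` stats page instead of scraping HTML classes. The same UP/DOWN test, sketched in Python (the sample CSV and function name are illustrative; a real haproxy CSV row has many more columns):

```python
import csv
import io

def down_servers(stats_csv):
    # haproxy's CSV page begins with '# pxname,svname,...'; strip the
    # marker, then report any real server row whose status is not UP
    # (FRONTEND/BACKEND rows are aggregate entries, not servers).
    reader = csv.DictReader(io.StringIO(stats_csv.lstrip('# ')))
    return [row['svname'] for row in reader
            if row['svname'] not in ('FRONTEND', 'BACKEND')
            and not row['status'].startswith('UP')]
```

Matching on the CSV `status` column is what the new `--regex=",${appserver},.*,UP.*"` expression does, just without parsing.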
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py	2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py	2016-02-22 12:50:59 +0000
@@ -50,7 +50,7 @@
     if kernel_version() >= (3, 13):
         return []
     else:
-        return ['openvswitch-datapath-dkms']
+        return [headers_package(), 'openvswitch-datapath-dkms']
 
 
 # legacy
@@ -70,7 +70,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=QUANTUM_CONF_DIR)],
             'services': ['quantum-plugin-openvswitch-agent'],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['quantum-plugin-openvswitch-agent']],
             'server_packages': ['quantum-server',
                                 'quantum-plugin-openvswitch'],
@@ -111,7 +111,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=NEUTRON_CONF_DIR)],
             'services': ['neutron-plugin-openvswitch-agent'],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['neutron-plugin-openvswitch-agent']],
             'server_packages': ['neutron-server',
                                 'neutron-plugin-openvswitch'],
@@ -155,7 +155,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=NEUTRON_CONF_DIR)],
             'services': [],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['neutron-plugin-cisco']],
             'server_packages': ['neutron-server',
                                 'neutron-plugin-cisco'],
@@ -174,7 +174,7 @@
                          'neutron-dhcp-agent',
                          'nova-api-metadata',
                          'etcd'],
-            'packages': [[headers_package()] + determine_dkms_package(),
+            'packages': [determine_dkms_package(),
                          ['calico-compute',
                           'bird',
                           'neutron-dhcp-agent',
@@ -204,8 +204,8 @@
                                      database=config('database'),
                                      ssl_dir=NEUTRON_CONF_DIR)],
             'services': [],
-            'packages': [['plumgrid-lxc'],
-                         ['iovisor-dkms']],
+            'packages': ['plumgrid-lxc',
+                         'iovisor-dkms'],
             'server_packages': ['neutron-server',
                                 'neutron-plugin-plumgrid'],
             'server_services': ['neutron-server']
@@ -219,7 +219,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=NEUTRON_CONF_DIR)],
             'services': [],
-            'packages': [[headers_package()] + determine_dkms_package()],
+            'packages': [determine_dkms_package()],
             'server_packages': ['neutron-server',
                                 'python-neutron-plugin-midonet'],
             'server_services': ['neutron-server']
@@ -233,6 +233,16 @@
                           'neutron-plugin-ml2']
     # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
     plugins['nvp'] = plugins['nsx']
+    if release >= 'kilo':
+        plugins['midonet']['driver'] = (
+            'neutron.plugins.midonet.plugin.MidonetPluginV2')
+    if release >= 'liberty':
+        plugins['midonet']['driver'] = (
+            'midonet.neutron.plugin_v1.MidonetPluginV2')
+        plugins['midonet']['server_packages'].remove(
+            'python-neutron-plugin-midonet')
+        plugins['midonet']['server_packages'].append(
+            'python-networking-midonet')
     return plugins
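With this change `determine_dkms_package()` carries the kernel headers itself, so callers no longer prepend `[headers_package()]`. A sketch of the gating logic, under the assumption that `headers_package()` resolves to `linux-headers-generic` (the real helper derives it from the running kernel):

```python
def determine_dkms_package(kernel_version):
    # Kernels >= 3.13 ship a usable in-tree openvswitch module, so no
    # DKMS build (and therefore no headers package) is required.
    if kernel_version >= (3, 13):
        return []
    return ['linux-headers-generic', 'openvswitch-datapath-dkms']
```

Centralising the headers dependency here is what lets every plugin entry in the diff collapse from `[[headers_package()] + determine_dkms_package(), ...]` to `[determine_dkms_package(), ...]`.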
=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg	2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg	2016-02-22 12:50:59 +0000
@@ -12,27 +12,35 @@
     option tcplog
     option dontlognull
     retries 3
-    timeout queue 1000
-    timeout connect 1000
-{% if haproxy_client_timeout -%}
+{%- if haproxy_queue_timeout %}
+    timeout queue {{ haproxy_queue_timeout }}
+{%- else %}
+    timeout queue 5000
+{%- endif %}
+{%- if haproxy_connect_timeout %}
+    timeout connect {{ haproxy_connect_timeout }}
+{%- else %}
+    timeout connect 5000
+{%- endif %}
+{%- if haproxy_client_timeout %}
     timeout client {{ haproxy_client_timeout }}
-{% else -%}
+{%- else %}
     timeout client 30000
-{% endif -%}
-
-{% if haproxy_server_timeout -%}
+{%- endif %}
+{%- if haproxy_server_timeout %}
     timeout server {{ haproxy_server_timeout }}
-{% else -%}
+{%- else %}
     timeout server 30000
-{% endif -%}
+{%- endif %}
 
-listen stats {{ stat_port }}
+listen stats
+    bind {{ local_host }}:{{ stat_port }}
     mode http
     stats enable
     stats hide-version
     stats realm Haproxy\ Statistics
     stats uri /
-    stats auth admin:password
+    stats auth admin:{{ stat_password }}
 
{% if frontends -%}
{% for service, ports in service_ports.items() -%}
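The template now takes all four timeouts from the context, falling back to fixed defaults, and raises the old 1000 ms queue/connect defaults to 5000 ms. The fallback logic, sketched as a plain function (the function name and dict-based config are illustrative):

```python
def haproxy_timeouts(config):
    # Defaults match the template: queue/connect 5000 ms,
    # client/server 30000 ms; any configured value wins.
    return {
        'queue': config.get('haproxy-queue-timeout') or 5000,
        'connect': config.get('haproxy-connect-timeout') or 5000,
        'client': config.get('haproxy-client-timeout') or 30000,
        'server': config.get('haproxy-server-timeout') or 30000,
    }
```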
=== modified file 'hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken'
--- hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken	2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken	2016-02-22 12:50:59 +0000
@@ -1,4 +1,14 @@
 {% if auth_host -%}
+{% if api_version == '3' -%}
+[keystone_authtoken]
+auth_url = {{ service_protocol }}://{{ service_host }}:{{ service_port }}
+project_name = {{ admin_tenant_name }}
+username = {{ admin_user }}
+password = {{ admin_password }}
+project_domain_name = default
+user_domain_name = default
+auth_plugin = password
+{% else -%}
 [keystone_authtoken]
 identity_uri = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/{{ auth_admin_prefix }}
 auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/{{ service_admin_prefix }}
@@ -7,3 +17,4 @@
 admin_password = {{ admin_password }}
 signing_dir = {{ signing_dir }}
 {% endif -%}
+{% endif -%}
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-02-22 12:50:59 +0000
@@ -23,8 +23,10 @@
23import os23import os
24import sys24import sys
25import re25import re
26import itertools
2627
27import six28import six
29import tempfile
28import traceback30import traceback
29import uuid31import uuid
30import yaml32import yaml
@@ -41,6 +43,7 @@
41 config,43 config,
42 log as juju_log,44 log as juju_log,
43 charm_dir,45 charm_dir,
46 DEBUG,
44 INFO,47 INFO,
45 related_units,48 related_units,
46 relation_ids,49 relation_ids,
@@ -58,6 +61,7 @@
58from charmhelpers.contrib.network.ip import (61from charmhelpers.contrib.network.ip import (
59 get_ipv6_addr,62 get_ipv6_addr,
60 is_ipv6,63 is_ipv6,
64 port_has_listener,
61)65)
6266
63from charmhelpers.contrib.python.packages import (67from charmhelpers.contrib.python.packages import (
@@ -65,7 +69,7 @@
65 pip_install,69 pip_install,
66)70)
6771
68from charmhelpers.core.host import lsb_release, mounts, umount72from charmhelpers.core.host import lsb_release, mounts, umount, service_running
69from charmhelpers.fetch import apt_install, apt_cache, install_remote73from charmhelpers.fetch import apt_install, apt_cache, install_remote
70from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk74from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
71from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device75from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
@@ -86,6 +90,7 @@
86 ('utopic', 'juno'),90 ('utopic', 'juno'),
87 ('vivid', 'kilo'),91 ('vivid', 'kilo'),
88 ('wily', 'liberty'),92 ('wily', 'liberty'),
93 ('xenial', 'mitaka'),
89])94])
9095
9196
@@ -99,61 +104,70 @@
99 ('2014.2', 'juno'),104 ('2014.2', 'juno'),
100 ('2015.1', 'kilo'),105 ('2015.1', 'kilo'),
101 ('2015.2', 'liberty'),106 ('2015.2', 'liberty'),
107 ('2016.1', 'mitaka'),
102])108])
 
-# The ugly duckling
+# The ugly duckling - must list releases oldest to newest
 SWIFT_CODENAMES = OrderedDict([
-    ('1.4.3', 'diablo'),
-    ('1.4.8', 'essex'),
-    ('1.7.4', 'folsom'),
-    ('1.8.0', 'grizzly'),
-    ('1.7.7', 'grizzly'),
-    ('1.7.6', 'grizzly'),
-    ('1.10.0', 'havana'),
-    ('1.9.1', 'havana'),
-    ('1.9.0', 'havana'),
-    ('1.13.1', 'icehouse'),
-    ('1.13.0', 'icehouse'),
-    ('1.12.0', 'icehouse'),
-    ('1.11.0', 'icehouse'),
-    ('2.0.0', 'juno'),
-    ('2.1.0', 'juno'),
-    ('2.2.0', 'juno'),
-    ('2.2.1', 'kilo'),
-    ('2.2.2', 'kilo'),
-    ('2.3.0', 'liberty'),
-    ('2.4.0', 'liberty'),
-    ('2.5.0', 'liberty'),
+    ('diablo',
+        ['1.4.3']),
+    ('essex',
+        ['1.4.8']),
+    ('folsom',
+        ['1.7.4']),
+    ('grizzly',
+        ['1.7.6', '1.7.7', '1.8.0']),
+    ('havana',
+        ['1.9.0', '1.9.1', '1.10.0']),
+    ('icehouse',
+        ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
+    ('juno',
+        ['2.0.0', '2.1.0', '2.2.0']),
+    ('kilo',
+        ['2.2.1', '2.2.2']),
+    ('liberty',
+        ['2.3.0', '2.4.0', '2.5.0']),
+    ('mitaka',
+        ['2.5.0']),
 ])
 
 # >= Liberty version->codename mapping
 PACKAGE_CODENAMES = {
     'nova-common': OrderedDict([
-        ('12.0.0', 'liberty'),
+        ('12.0', 'liberty'),
+        ('13.0', 'mitaka'),
     ]),
     'neutron-common': OrderedDict([
-        ('7.0.0', 'liberty'),
+        ('7.0', 'liberty'),
+        ('8.0', 'mitaka'),
     ]),
     'cinder-common': OrderedDict([
-        ('7.0.0', 'liberty'),
+        ('7.0', 'liberty'),
+        ('8.0', 'mitaka'),
     ]),
     'keystone': OrderedDict([
-        ('8.0.0', 'liberty'),
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
    ]),
     'horizon-common': OrderedDict([
-        ('8.0.0', 'liberty'),
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
     ]),
     'ceilometer-common': OrderedDict([
-        ('5.0.0', 'liberty'),
+        ('5.0', 'liberty'),
+        ('6.0', 'mitaka'),
     ]),
     'heat-common': OrderedDict([
-        ('5.0.0', 'liberty'),
+        ('5.0', 'liberty'),
+        ('6.0', 'mitaka'),
     ]),
     'glance-common': OrderedDict([
-        ('11.0.0', 'liberty'),
+        ('11.0', 'liberty'),
+        ('12.0', 'mitaka'),
     ]),
     'openstack-dashboard': OrderedDict([
-        ('8.0.0', 'liberty'),
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
     ]),
 }
 
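The remapped SWIFT_CODENAMES table above (codename mapped to an oldest-to-newest list of versions, rather than version mapped to codename) makes shared versions such as '2.5.0' representable in more than one release. A minimal sketch of a lookup against an abbreviated copy of the table (illustrative only, not the full mapping):

```python
from collections import OrderedDict

# Abbreviated copy of the remapped table, for illustration only.
SWIFT_CODENAMES = OrderedDict([
    ('kilo', ['2.2.1', '2.2.2']),
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0']),
])


def codenames_for_version(version):
    """Return every codename whose version list contains `version`."""
    return [c for c, vers in SWIFT_CODENAMES.items() if version in vers]


# '2.5.0' is ambiguous: it shipped in both liberty and mitaka.
print(codenames_for_version('2.5.0'))
```

This ambiguity is exactly what the new get_swift_codename() helper later in the diff resolves against the configured install source.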
@@ -216,6 +230,33 @@
     error_out(e)
 
 
+def get_os_version_codename_swift(codename):
+    '''Determine OpenStack version number of swift from codename.'''
+    for k, v in six.iteritems(SWIFT_CODENAMES):
+        if k == codename:
+            return v[-1]
+    e = 'Could not derive swift version for '\
+        'codename: %s' % codename
+    error_out(e)
+
+
+def get_swift_codename(version):
+    '''Determine OpenStack codename that corresponds to swift version.'''
+    codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
+    if len(codenames) > 1:
+        # If more than one release codename contains this version we determine
+        # the actual codename based on the highest available install source.
+        for codename in reversed(codenames):
+            releases = UBUNTU_OPENSTACK_RELEASE
+            release = [k for k, v in six.iteritems(releases) if codename in v]
+            ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
+            if codename in ret or release[0] in ret:
+                return codename
+    elif len(codenames) == 1:
+        return codenames[0]
+    return None
+
+
 def get_os_codename_package(package, fatal=True):
     '''Derive OpenStack release codename from an installed package.'''
     import apt_pkg as apt
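get_swift_codename() above breaks a multi-codename tie by scanning `apt-cache policy swift` output newest-first. The tie-break loop can be sketched as a pure function with the policy text injected (the policy string below is hypothetical, no apt call is made):

```python
def pick_codename(candidates, policy_output):
    """Newest-first scan: return the first candidate codename that
    appears in the apt policy output, mirroring the reversed() loop."""
    for codename in reversed(candidates):
        if codename in policy_output:
            return codename
    return None


# Hypothetical policy text mentioning a mitaka cloud-archive pocket.
policy = "500 http://ubuntu-cloud.archive.canonical.com trusty-updates/mitaka"
print(pick_codename(['liberty', 'mitaka'], policy))
```

The real helper additionally matches on the UBUNTU_OPENSTACK_RELEASE pocket name for each candidate; this sketch only shows the newest-first preference.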
@@ -240,7 +281,14 @@
         error_out(e)
 
     vers = apt.upstream_version(pkg.current_ver.ver_str)
-    match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
+    if 'swift' in pkg.name:
+        # Fully x.y.z match for swift versions
+        match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
+    else:
+        # x.y match only for 20XX.X
+        # and ignore patch level for other packages
+        match = re.match('^(\d+)\.(\d+)', vers)
+
     if match:
         vers = match.group(0)
 
@@ -252,13 +300,8 @@
     # < Liberty co-ordinated project versions
     try:
         if 'swift' in pkg.name:
-            swift_vers = vers[:5]
-            if swift_vers not in SWIFT_CODENAMES:
-                # Deal with 1.10.0 upward
-                swift_vers = vers[:6]
-            return SWIFT_CODENAMES[swift_vers]
+            return get_swift_codename(vers)
         else:
-            vers = vers[:6]
             return OPENSTACK_CODENAMES[vers]
     except KeyError:
         if not fatal:
@@ -276,12 +319,14 @@
 
     if 'swift' in pkg:
         vers_map = SWIFT_CODENAMES
+        for cname, version in six.iteritems(vers_map):
+            if cname == codename:
+                return version[-1]
     else:
         vers_map = OPENSTACK_CODENAMES
-
-    for version, cname in six.iteritems(vers_map):
-        if cname == codename:
-            return version
+        for version, cname in six.iteritems(vers_map):
+            if cname == codename:
+                return version
     # e = "Could not determine OpenStack version for package: %s" % pkg
     # error_out(e)
 
@@ -306,12 +351,42 @@
 
 
 def import_key(keyid):
-    cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
-          "--recv-keys %s" % keyid
-    try:
-        subprocess.check_call(cmd.split(' '))
-    except subprocess.CalledProcessError:
-        error_out("Error importing repo key %s" % keyid)
+    key = keyid.strip()
+    if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
+            key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
+        juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
+        juju_log("Importing ASCII Armor PGP key", level=DEBUG)
+        with tempfile.NamedTemporaryFile() as keyfile:
+            with open(keyfile.name, 'w') as fd:
+                fd.write(key)
+                fd.write("\n")
+
+            cmd = ['apt-key', 'add', keyfile.name]
+            try:
+                subprocess.check_call(cmd)
+            except subprocess.CalledProcessError:
+                error_out("Error importing PGP key '%s'" % key)
+    else:
+        juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
+        juju_log("Importing PGP key from keyserver", level=DEBUG)
+        cmd = ['apt-key', 'adv', '--keyserver',
+               'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
+        try:
+            subprocess.check_call(cmd)
+        except subprocess.CalledProcessError:
+            error_out("Error importing PGP key '%s'" % key)
+
+
+def get_source_and_pgp_key(input):
+    """Look for a pgp key ID or ascii-armor key in the given input."""
+    index = input.strip()
+    index = input.rfind('|')
+    if index < 0:
+        return input, None
+
+    key = input[index + 1:].strip('|')
+    source = input[:index]
+    return source, key
 
 
 def configure_installation_source(rel):
@@ -323,16 +398,16 @@
         with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
             f.write(DISTRO_PROPOSED % ubuntu_rel)
     elif rel[:4] == "ppa:":
-        src = rel
+        src, key = get_source_and_pgp_key(rel)
+        if key:
+            import_key(key)
+
         subprocess.check_call(["add-apt-repository", "-y", src])
     elif rel[:3] == "deb":
-        l = len(rel.split('|'))
-        if l == 2:
-            src, key = rel.split('|')
-            juju_log("Importing PPA key from keyserver for %s" % src)
+        src, key = get_source_and_pgp_key(rel)
+        if key:
             import_key(key)
-        elif l == 1:
-            src = rel
+
         with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
             f.write(src)
     elif rel[:6] == 'cloud:':
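Both the ppa: and deb branches above now delegate to get_source_and_pgp_key(), which splits on the last '|'. Its behaviour can be sketched with a standalone copy (not an import from charm-helpers):

```python
def get_source_and_pgp_key(source):
    """Split 'deb http://... trusty main|KEYID' into (source, key);
    a source with no '|' separator returns the whole string and None."""
    index = source.rfind('|')
    if index < 0:
        return source, None
    return source[:index], source[index + 1:].strip('|')


# Hypothetical source lines for illustration.
print(get_source_and_pgp_key('deb http://example.com trusty main|DEADBEEF'))
print(get_source_and_pgp_key('ppa:example/archive'))
```

Using rfind() means a '|' may legitimately appear earlier in the source string; only the final segment is treated as the key.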
@@ -377,6 +452,9 @@
377 'liberty': 'trusty-updates/liberty',452 'liberty': 'trusty-updates/liberty',
378 'liberty/updates': 'trusty-updates/liberty',453 'liberty/updates': 'trusty-updates/liberty',
379 'liberty/proposed': 'trusty-proposed/liberty',454 'liberty/proposed': 'trusty-proposed/liberty',
455 'mitaka': 'trusty-updates/mitaka',
456 'mitaka/updates': 'trusty-updates/mitaka',
457 'mitaka/proposed': 'trusty-proposed/mitaka',
380 }458 }
381459
382 try:460 try:
@@ -444,11 +522,16 @@
     cur_vers = get_os_version_package(package)
     if "swift" in package:
         codename = get_os_codename_install_source(src)
-        available_vers = get_os_version_codename(codename, SWIFT_CODENAMES)
+        avail_vers = get_os_version_codename_swift(codename)
     else:
-        available_vers = get_os_version_install_source(src)
+        avail_vers = get_os_version_install_source(src)
     apt.init()
-    return apt.version_compare(available_vers, cur_vers) == 1
+    if "swift" in package:
+        major_cur_vers = cur_vers.split('.', 1)[0]
+        major_avail_vers = avail_vers.split('.', 1)[0]
+        major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
+        return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
+    return apt.version_compare(avail_vers, cur_vers) == 1
 
 
 def ensure_block_device(block_device):
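For swift, the upgrade check above now only reports an upgrade when the available version is newer AND the major-version jump is at most one. A sketch of that rule using plain tuple comparison in place of apt_pkg's version_compare (an assumption for illustration — apt_pkg also handles epochs and tildes, which this does not):

```python
def swift_upgrade_available(cur_vers, avail_vers):
    """Upgrade only if available > current and the major version
    difference is 0 or 1 (e.g. 2.2.2 -> 2.5.0 yes, 1.x -> 3.x no)."""
    def as_tuple(v):
        return tuple(int(p) for p in v.split('.'))

    major_diff = as_tuple(avail_vers)[0] - as_tuple(cur_vers)[0]
    return as_tuple(avail_vers) > as_tuple(cur_vers) and major_diff in (0, 1)
```

This mirrors the intent of the swift branch: with '2.5.0' shared between liberty and mitaka, an unbounded comparison could suggest skipping across major releases.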
@@ -577,7 +660,7 @@
     return yaml.load(projects_yaml)
 
 
-def git_clone_and_install(projects_yaml, core_project, depth=1):
+def git_clone_and_install(projects_yaml, core_project):
     """
     Clone/install all specified OpenStack repositories.
 
@@ -627,6 +710,9 @@
     for p in projects['repositories']:
         repo = p['repository']
         branch = p['branch']
+        depth = '1'
+        if 'depth' in p.keys():
+            depth = p['depth']
         if p['name'] == 'requirements':
             repo_dir = _git_clone_and_install_single(repo, branch, depth,
                                                      parent_dir, http_proxy,
@@ -671,19 +757,13 @@
     """
     Clone and install a single git repository.
     """
-    dest_dir = os.path.join(parent_dir, os.path.basename(repo))
-
     if not os.path.exists(parent_dir):
         juju_log('Directory already exists at {}. '
                  'No need to create directory.'.format(parent_dir))
         os.mkdir(parent_dir)
 
-    if not os.path.exists(dest_dir):
-        juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
-        repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
-                                  depth=depth)
-    else:
-        repo_dir = dest_dir
+    juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
+    repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth)
 
     venv = os.path.join(parent_dir, 'venv')
 
@@ -782,13 +862,23 @@
     return wrap
 
 
-def set_os_workload_status(configs, required_interfaces, charm_func=None):
+def set_os_workload_status(configs, required_interfaces, charm_func=None, services=None, ports=None):
     """
     Set workload status based on complete contexts.
     status-set missing or incomplete contexts
     and juju-log details of missing required data.
     charm_func is a charm specific function to run checking
     for charm specific requirements such as a VIP setting.
+
+    This function also checks for whether the services defined are ACTUALLY
+    running and that the ports they advertise are open and being listened to.
+
+    @param services - OPTIONAL: a [{'service': <string>, 'ports': [<int>]}]
+                      The ports are optional.
+                      If services is a [<string>] then ports are ignored.
+    @param ports - OPTIONAL: an [<int>] representing ports that should be
+                   open.
+    @returns None
     """
     incomplete_rel_data = incomplete_relation_data(configs, required_interfaces)
     state = 'active'
@@ -867,6 +957,65 @@
     else:
         message = charm_message
 
+    # If the charm thinks the unit is active, check that the actual services
+    # really are active.
+    if services is not None and state == 'active':
+        # if we're passed the dict() then just grab the values as a list.
+        if isinstance(services, dict):
+            services = services.values()
+        # either extract the list of services from the dictionary, or if
+        # it is a simple string, use that. i.e. works with mixed lists.
+        _s = []
+        for s in services:
+            if isinstance(s, dict) and 'service' in s:
+                _s.append(s['service'])
+            if isinstance(s, str):
+                _s.append(s)
+        services_running = [service_running(s) for s in _s]
+        if not all(services_running):
+            not_running = [s for s, running in zip(_s, services_running)
+                           if not running]
+            message = ("Services not running that should be: {}"
+                       .format(", ".join(not_running)))
+            state = 'blocked'
+        # also verify that the ports that should be open are open
+        # NB, that ServiceManager objects only OPTIONALLY have ports
+        port_map = OrderedDict([(s['service'], s['ports'])
+                                for s in services if 'ports' in s])
+        if state == 'active' and port_map:
+            all_ports = list(itertools.chain(*port_map.values()))
+            ports_open = [port_has_listener('0.0.0.0', p)
+                          for p in all_ports]
+            if not all(ports_open):
+                not_opened = [p for p, opened in zip(all_ports, ports_open)
+                              if not opened]
+                map_not_open = OrderedDict()
+                for service, ports in port_map.items():
+                    closed_ports = set(ports).intersection(not_opened)
+                    if closed_ports:
+                        map_not_open[service] = closed_ports
+                # find which service has missing ports. They are in service
+                # order which makes it a bit easier.
+                message = (
+                    "Services with ports not open that should be: {}"
+                    .format(
+                        ", ".join([
+                            "{}: [{}]".format(
+                                service,
+                                ", ".join([str(v) for v in ports]))
+                            for service, ports in map_not_open.items()])))
+                state = 'blocked'
+
+    if ports is not None and state == 'active':
+        # and we can also check ports which we don't know the service for
+        ports_open = [port_has_listener('0.0.0.0', p) for p in ports]
+        if not all(ports_open):
+            message = (
+                "Ports which should be open, but are not: {}"
+                .format(", ".join([str(p) for p, v in zip(ports, ports_open)
+                                   if not v])))
+            state = 'blocked'
+
     # Set to active if all requirements have been met
     if state == 'active':
         message = "Unit is ready"
 
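The service check added above accepts either plain service names or {'service': ..., 'ports': [...]} dicts in the same list. The extraction step can be sketched as (standalone copy of the `_s` accumulation loop, using elif since a dict is never a str):

```python
def service_names(services):
    """Flatten a mixed list of strings and service dicts into names."""
    names = []
    for s in services:
        if isinstance(s, dict) and 'service' in s:
            names.append(s['service'])
        elif isinstance(s, str):
            names.append(s)
    return names


print(service_names([{'service': 'haproxy', 'ports': [80]}, 'apache2']))
```

Entries that are neither strings nor dicts with a 'service' key are silently dropped, matching the original loop's behaviour.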
=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
--- hooks/charmhelpers/contrib/python/packages.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/python/packages.py 2016-02-22 12:50:59 +0000
@@ -19,20 +19,35 @@
 
 import os
 import subprocess
+import sys
 
 from charmhelpers.fetch import apt_install, apt_update
 from charmhelpers.core.hookenv import charm_dir, log
 
-try:
-    from pip import main as pip_execute
-except ImportError:
-    apt_update()
-    apt_install('python-pip')
-    from pip import main as pip_execute
-
 __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
 
 
+def pip_execute(*args, **kwargs):
+    """Overridden pip_execute() to stop sys.path being changed.
+
+    The act of importing main from the pip module seems to add wheels
+    from /usr/share/python-wheels (installed by various tools) to sys.path.
+    This function ensures that sys.path remains the same after the call is
+    executed.
+    """
+    try:
+        _path = sys.path
+        try:
+            from pip import main as _pip_execute
+        except ImportError:
+            apt_update()
+            apt_install('python-pip')
+            from pip import main as _pip_execute
+        _pip_execute(*args, **kwargs)
+    finally:
+        sys.path = _path
+
+
 def parse_options(given, available):
     """Given a set of options, check if available"""
     for key, value in sorted(given.items()):
@@ -42,8 +57,12 @@
         yield "--{0}={1}".format(key, value)
 
 
-def pip_install_requirements(requirements, **options):
-    """Install a requirements file """
+def pip_install_requirements(requirements, constraints=None, **options):
+    """Install a requirements file.
+
+    :param constraints: Path to pip constraints file.
+    http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
+    """
     command = ["install"]
 
     available_options = ('proxy', 'src', 'log', )
@@ -51,8 +70,13 @@
         command.append(option)
 
     command.append("-r {0}".format(requirements))
-    log("Installing from file: {} with options: {}".format(requirements,
-                                                           command))
+    if constraints:
+        command.append("-c {0}".format(constraints))
+        log("Installing from file: {} with constraints {} "
+            "and options: {}".format(requirements, constraints, command))
+    else:
+        log("Installing from file: {} with options: {}".format(requirements,
+                                                               command))
     pip_execute(command)
 
 
 
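The rewritten pip_execute() above guards against pip's import mutating sys.path. The underlying save/restore pattern is simply:

```python
import sys


def run_with_pristine_sys_path(func, *args, **kwargs):
    """Call func, then restore sys.path even if the call (or an import
    made inside it) appended extra entries such as wheel directories."""
    _path = list(sys.path)  # copy, so in-place mutation can't leak through
    try:
        return func(*args, **kwargs)
    finally:
        sys.path = _path


# Demonstrate: the appended entry does not survive the call.
run_with_pristine_sys_path(sys.path.append, '/tmp/bogus-wheels')
```

One reviewer-level observation: the in-diff version binds `_path = sys.path` without copying, so if pip appends to the same list object in place, reassigning `sys.path = _path` restores nothing; taking a copy, as in this sketch, is the safer form.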
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-02-22 12:50:59 +0000
@@ -23,6 +23,8 @@
 # James Page <james.page@ubuntu.com>
 # Adam Gandelman <adamg@ubuntu.com>
 #
+import bisect
+import six
 
 import os
 import shutil
@@ -72,6 +74,394 @@
 err to syslog = {use_syslog}
 clog to syslog = {use_syslog}
 """
+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
+
+
+def validator(value, valid_type, valid_range=None):
+    """
+    Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
+    Example input:
+        validator(value=1,
+                  valid_type=int,
+                  valid_range=[0, 2])
+    This says I'm testing value=1. It must be an int inclusive in [0,2]
+
+    :param value: The value to validate
+    :param valid_type: The type that value should be.
+    :param valid_range: A range of values that value can assume.
+    :return:
+    """
+    assert isinstance(value, valid_type), "{} is not a {}".format(
+        value,
+        valid_type)
+    if valid_range is not None:
+        assert isinstance(valid_range, list), \
+            "valid_range must be a list, was given {}".format(valid_range)
+        # If we're dealing with strings
+        if valid_type is six.string_types:
+            assert value in valid_range, \
+                "{} is not in the list {}".format(value, valid_range)
+        # Integer, float should have a min and max
+        else:
+            if len(valid_range) != 2:
+                raise ValueError(
+                    "Invalid valid_range list of {} for {}. "
+                    "List must be [min,max]".format(valid_range, value))
+            assert value >= valid_range[0], \
+                "{} is less than minimum allowed value of {}".format(
+                    value, valid_range[0])
+            assert value <= valid_range[1], \
+                "{} is greater than maximum allowed value of {}".format(
+                    value, valid_range[1])
+
+
+class PoolCreationError(Exception):
+    """
+    A custom error to inform the caller that a pool creation failed. Provides an error message
+    """
+    def __init__(self, message):
+        super(PoolCreationError, self).__init__(message)
+
+
+class Pool(object):
+    """
+    An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
+    Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
+    """
+    def __init__(self, service, name):
+        self.service = service
+        self.name = name
+
+    # Create the pool if it doesn't exist already
+    # To be implemented by subclasses
+    def create(self):
+        pass
+
+    def add_cache_tier(self, cache_pool, mode):
+        """
+        Adds a new cache tier to an existing pool.
+        :param cache_pool: six.string_types. The cache tier pool name to add.
+        :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
+        :return: None
+        """
+        # Check the input types and values
+        validator(value=cache_pool, valid_type=six.string_types)
+        validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
+
+        check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
+        check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
+        check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
+        check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
+
+    def remove_cache_tier(self, cache_pool):
+        """
+        Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
+        :param cache_pool: six.string_types. The cache tier pool name to remove.
+        :return: None
+        """
+        # read-only is easy, writeback is much harder
+        mode = get_cache_mode(cache_pool)
+        if mode == 'readonly':
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
+
+        elif mode == 'writeback':
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
+            # Flush the cache and wait for it to return
+            check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
+
+    def get_pgs(self, pool_size):
+        """
+        :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
+            erasure coded pools
+        :return: int. The number of pgs to use.
+        """
+        validator(value=pool_size, valid_type=int)
+        osds = get_osds(self.service)
+        if not osds:
+            # NOTE(james-page): Default to 200 for older ceph versions
+            # which don't support OSD query from cli
+            return 200
+
+        # Calculate based on Ceph best practices
+        if osds < 5:
+            return 128
+        elif 5 < osds < 10:
+            return 512
+        elif 10 < osds < 50:
+            return 4096
+        else:
+            estimate = (osds * 100) / pool_size
+            # Return the next nearest power of 2
+            index = bisect.bisect_right(powers_of_two, estimate)
+            return powers_of_two[index]
+
+
+class ReplicatedPool(Pool):
+    def __init__(self, service, name, replicas=2):
+        super(ReplicatedPool, self).__init__(service=service, name=name)
+        self.replicas = replicas
+
+    def create(self):
+        if not pool_exists(self.service, self.name):
+            # Create it
+            pgs = self.get_pgs(self.replicas)
+            cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
+            try:
+                check_call(cmd)
+            except CalledProcessError:
+                raise
+
+
+# Default jerasure erasure coded pool
+class ErasurePool(Pool):
+    def __init__(self, service, name, erasure_code_profile="default"):
+        super(ErasurePool, self).__init__(service=service, name=name)
+        self.erasure_code_profile = erasure_code_profile
+
+    def create(self):
+        if not pool_exists(self.service, self.name):
+            # Try to find the erasure profile information so we can properly size the pgs
+            erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
+
+            # Check for errors
+            if erasure_profile is None:
+                log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
+                    level=ERROR)
+                raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
+            if 'k' not in erasure_profile or 'm' not in erasure_profile:
+                # Error
+                log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
+                    level=ERROR)
+                raise PoolCreationError(
+                    message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
+
+            pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
+            # Create it
+            cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
+                   'erasure', self.erasure_code_profile]
+            try:
+                check_call(cmd)
+            except CalledProcessError:
+                raise
+
+    """Get an existing erasure code profile if it already exists.
+       Returns json formatted output"""
+
+
+def get_erasure_profile(service, name):
+    """
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param name:
+    :return:
+    """
+    try:
+        out = check_output(['ceph', '--id', service,
+                            'osd', 'erasure-code-profile', 'get',
+                            name, '--format=json'])
+        return json.loads(out)
+    except (CalledProcessError, OSError, ValueError):
+        return None
+
+
+def pool_set(service, pool_name, key, value):
+    """
+    Sets a value for a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param key: six.string_types
+    :param value:
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def snapshot_pool(service, pool_name, snapshot_name):
+    """
+    Snapshots a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def remove_pool_snapshot(service, pool_name, snapshot_name):
+    """
+    Remove a snapshot from a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+# max_bytes should be an int or long
+def set_pool_quota(service, pool_name, max_bytes):
+    """
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param max_bytes: int or long
+    :return: None. Can raise CalledProcessError
+    """
+    # Set a byte quota on a RADOS pool in ceph.
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', max_bytes]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def remove_pool_quota(service, pool_name):
+    """
+    Set a byte quota on a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
+                           data_chunks=2, coding_chunks=1,
+                           locality=None, durability_estimator=None):
+    """
+    Create a new erasure code profile if one does not already exist for it. Updates
+    the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
+    for more details
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param profile_name: six.string_types
+    :param erasure_plugin_name: six.string_types
+    :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
+        'room', 'root', 'row'])
+    :param data_chunks: int
+    :param coding_chunks: int
+    :param locality: int
+    :param durability_estimator: int
+    :return: None. Can raise CalledProcessError
+    """
+    # Ensure this failure_domain is allowed by Ceph
+    validator(failure_domain, six.string_types,
+              ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
+
+    cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
+           'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
+           'ruleset_failure_domain=' + failure_domain]
+    if locality is not None and durability_estimator is not None:
+        raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
+
+    # Add plugin specific information
+    if locality is not None:
+        # For local erasure codes
+        cmd.append('l=' + str(locality))
+    if durability_estimator is not None:
+        # For Shec erasure codes
+        cmd.append('c=' + str(durability_estimator))
+
+    if erasure_profile_exists(service, profile_name):
+        cmd.append('--force')
+
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def rename_pool(service, old_name, new_name):
+    """
+    Rename a Ceph pool from old_name to new_name
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param old_name: six.string_types
+    :param new_name: six.string_types
+    :return: None
+    """
+    validator(value=old_name, valid_type=six.string_types)
+    validator(value=new_name, valid_type=six.string_types)
+
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
+    check_call(cmd)
+
+
+def erasure_profile_exists(service, name):
+    """
+    Check to see if an Erasure code profile already exists.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param name: six.string_types
+    :return: int or None
+    """
+    validator(value=name, valid_type=six.string_types)
+    try:
+        check_call(['ceph', '--id', service,
+                    'osd', 'erasure-code-profile', 'get',
+                    name])
+        return True
+    except CalledProcessError:
+        return False
+
+
+def get_cache_mode(service, pool_name):
+    """
+    Find the current caching mode of the pool_name given.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :return: int or None
+    """
+    validator(value=service, valid_type=six.string_types)
+    validator(value=pool_name, valid_type=six.string_types)
+    out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
+    try:
+        osd_json = json.loads(out)
+        for pool in osd_json['pools']:
+            if pool['pool_name'] == pool_name:
+                return pool['cache_mode']
+        return None
+    except ValueError:
+        raise
+
+
+def pool_exists(service, name):
+    """Check to see if a RADOS pool already exists."""
+    try:
+        out = check_output(['rados', '--id', service,
+                            'lspools']).decode('UTF-8')
+    except CalledProcessError:
+        return False
+
+    return name in out
+
+
+def get_osds(service):
+    """Return a list of all Ceph Object Storage Daemons currently in the
+    cluster.
+    """
+    version = ceph_version()
+    if version and version >= '0.56':
+        return json.loads(check_output(['ceph', '--id', service,
+                                        'osd', 'ls',
+                                        '--format=json']).decode('UTF-8'))
+
+    return None
 
 
 def install():
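Pool.get_pgs() above rounds its pg estimate using bisect against the powers_of_two table. The rounding step in isolation:

```python
import bisect

# Matches the powers_of_two table in the diff (for the > 50 OSD case).
POWERS_OF_TWO = [8192, 16384, 32768, 65536, 131072, 262144,
                 524288, 1048576, 2097152, 4194304, 8388608]


def next_power_of_two(estimate):
    """Return the smallest table entry strictly greater than estimate;
    raises IndexError past the table's end, as the original would."""
    index = bisect.bisect_right(POWERS_OF_TWO, estimate)
    return POWERS_OF_TWO[index]
```

Note that bisect_right means an estimate that lands exactly on a power of two (e.g. 8192) is rounded up to the next entry; whether that is intended or an off-by-one in the proposed get_pgs() is worth a reviewer's look.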
@@ -101,53 +491,37 @@
     check_call(cmd)
 
 
-def pool_exists(service, name):
-    """Check to see if a RADOS pool already exists."""
-    try:
-        out = check_output(['rados', '--id', service,
-                            'lspools']).decode('UTF-8')
-    except CalledProcessError:
-        return False
-
-    return name in out
-
-
-def get_osds(service):
-    """Return a list of all Ceph Object Storage Daemons currently in the
-    cluster.
-    """
-    version = ceph_version()
-    if version and version >= '0.56':
-        return json.loads(check_output(['ceph', '--id', service,
-                                        'osd', 'ls',
-                                        '--format=json']).decode('UTF-8'))
-
-    return None
-
-
-def create_pool(service, name, replicas=3):
+def update_pool(client, pool, settings):
+    cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
+    for k, v in six.iteritems(settings):
+        cmd.append(k)
+        cmd.append(v)
+
+    check_call(cmd)
+
+
+def create_pool(service, name, replicas=3, pg_num=None):
     """Create a new RADOS pool."""
     if pool_exists(service, name):
         log("Ceph pool {} already exists, skipping creation".format(name),
             level=WARNING)
         return
 
-    # Calculate the number of placement groups based
-    # on upstream recommended best practices.
-    osds = get_osds(service)
-    if osds:
-        pgnum = (len(osds) * 100 // replicas)
-    else:
-        # NOTE(james-page): Default to 200 for older ceph versions
-        # which don't support OSD query from cli
-        pgnum = 200
-
-    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
-    check_call(cmd)
+    if not pg_num:
+        # Calculate the number of placement groups based
+        # on upstream recommended best practices.
+        osds = get_osds(service)
+        if osds:
+            pg_num = (len(osds) * 100 // replicas)
+        else:
+            # NOTE(james-page): Default to 200 for older ceph versions
+            # which don't support OSD query from cli
+            pg_num = 200
+
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
147522 check_call(cmd)
148 cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',523
149 str(replicas)]524 update_pool(service, name, settings={'size': str(replicas)})
150 check_call(cmd)
151525
152526
153def delete_pool(service, name):527def delete_pool(service, name):
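The placement-group sizing in the reworked `create_pool` (an explicit `pg_num` wins; otherwise roughly 100 PGs per OSD divided by the replica count, falling back to 200 when the OSD count cannot be queried) can be exercised in isolation. `calc_pg_num` is a hypothetical stand-alone restatement for illustration, not part of charmhelpers:

```python
def calc_pg_num(osd_count, replicas=3, pg_num=None):
    """Mirror the charm's sizing logic: an explicit pg_num wins,
    otherwise derive ~100 PGs per OSD split across replicas, and
    fall back to 200 when the OSD count is unavailable."""
    if pg_num:
        return pg_num
    if osd_count:
        # integer division, matching the (len(osds) * 100 // replicas) rule
        return osd_count * 100 // replicas
    # NOTE: older ceph releases cannot report OSDs via the CLI
    return 200
```

For example, a six-OSD cluster with the default three replicas yields 200 placement groups, while dropping to two replicas raises that to 300.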
@@ -202,10 +576,10 @@
     log('Created new keyfile at %s.' % keyfile, level=INFO)
 
 
-def get_ceph_nodes():
-    """Query named relation 'ceph' to determine current nodes."""
+def get_ceph_nodes(relation='ceph'):
+    """Query named relation to determine current nodes."""
     hosts = []
-    for r_id in relation_ids('ceph'):
+    for r_id in relation_ids(relation):
         for unit in related_units(r_id):
             hosts.append(relation_get('private-address', unit=unit, rid=r_id))
 
@@ -357,14 +731,14 @@
     service_start(svc)
 
 
-def ensure_ceph_keyring(service, user=None, group=None):
+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
     """Ensures a ceph keyring is created for a named service and optionally
     ensures user and group ownership.
 
     Returns False if no ceph key is available in relation state.
     """
     key = None
-    for rid in relation_ids('ceph'):
+    for rid in relation_ids(relation):
         for unit in related_units(rid):
             key = relation_get('key', rid=rid, unit=unit)
             if key:
@@ -405,6 +779,7 @@
 
     The API is versioned and defaults to version 1.
     """
+
     def __init__(self, api_version=1, request_id=None):
         self.api_version = api_version
         if request_id:
@@ -413,9 +788,16 @@
             self.request_id = str(uuid.uuid1())
         self.ops = []
 
-    def add_op_create_pool(self, name, replica_count=3):
+    def add_op_create_pool(self, name, replica_count=3, pg_num=None):
+        """Adds an operation to create a pool.
+
+        @param pg_num setting: optional setting. If not provided, this value
+        will be calculated by the broker based on how many OSDs are in the
+        cluster at the time of creation. Note that, if provided, this value
+        will be capped at the current available maximum.
+        """
         self.ops.append({'op': 'create-pool', 'name': name,
-                         'replicas': replica_count})
+                         'replicas': replica_count, 'pg_num': pg_num})
 
     def set_ops(self, ops):
         """Set request ops to provided value.
@@ -433,8 +815,8 @@
     def _ops_equal(self, other):
         if len(self.ops) == len(other.ops):
             for req_no in range(0, len(self.ops)):
-                for key in ['replicas', 'name', 'op']:
-                    if self.ops[req_no][key] != other.ops[req_no][key]:
+                for key in ['replicas', 'name', 'op', 'pg_num']:
+                    if self.ops[req_no].get(key) != other.ops[req_no].get(key):
                         return False
         else:
             return False
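The move from `self.ops[req_no][key]` to `.get(key)` in `_ops_equal` is what keeps old and new requests comparable: a request recorded before this change carries no `'pg_num'` key, and plain indexing would raise `KeyError`. A minimal sketch of the comparison (`ops_equal` is a hypothetical free-function restatement):

```python
def ops_equal(ops_a, ops_b):
    """Compare two broker op lists field by field, tolerating ops that
    predate the 'pg_num' field (missing keys compare as None)."""
    if len(ops_a) != len(ops_b):
        return False
    for a, b in zip(ops_a, ops_b):
        for key in ['replicas', 'name', 'op', 'pg_num']:
            if a.get(key) != b.get(key):
                return False
    return True

# an op recorded by an older charm, and the same op from a newer one
old_style = [{'op': 'create-pool', 'name': 'cinder', 'replicas': 3}]
new_style = [{'op': 'create-pool', 'name': 'cinder', 'replicas': 3,
              'pg_num': None}]
```

With `.get()`, the old-style op (no `'pg_num'`) and the new-style op (`'pg_num': None`) compare equal, so an already-satisfied request is not needlessly resent after an upgrade.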
@@ -540,7 +922,7 @@
     return request
 
 
-def get_request_states(request):
+def get_request_states(request, relation='ceph'):
     """Return a dict of requests per relation id with their corresponding
     completion state.
 
@@ -552,7 +934,7 @@
     """
     complete = []
     requests = {}
-    for rid in relation_ids('ceph'):
+    for rid in relation_ids(relation):
         complete = False
         previous_request = get_previous_request(rid)
         if request == previous_request:
@@ -570,14 +952,14 @@
     return requests
 
 
-def is_request_sent(request):
+def is_request_sent(request, relation='ceph'):
     """Check to see if a functionally equivalent request has already been sent
 
     Returns True if a similar request has been sent
 
     @param request: A CephBrokerRq object
     """
-    states = get_request_states(request)
+    states = get_request_states(request, relation=relation)
     for rid in states.keys():
         if not states[rid]['sent']:
             return False
@@ -585,7 +967,7 @@
 
     return True
 
 
-def is_request_complete(request):
+def is_request_complete(request, relation='ceph'):
     """Check to see if a functionally equivalent request has already been
     completed
 
@@ -593,7 +975,7 @@
 
     @param request: A CephBrokerRq object
     """
-    states = get_request_states(request)
+    states = get_request_states(request, relation=relation)
     for rid in states.keys():
         if not states[rid]['complete']:
             return False
@@ -643,15 +1025,15 @@
     return 'broker-rsp-' + local_unit().replace('/', '-')
 
 
-def send_request_if_needed(request):
+def send_request_if_needed(request, relation='ceph'):
     """Send broker request if an equivalent request has not already been sent
 
     @param request: A CephBrokerRq object
     """
-    if is_request_sent(request):
+    if is_request_sent(request, relation=relation):
         log('Request already sent but not complete, not sending new request',
             level=DEBUG)
     else:
-        for rid in relation_ids('ceph'):
+        for rid in relation_ids(relation):
             log('Sending request {}'.format(request.request_id), level=DEBUG)
             relation_set(relation_id=rid, broker_req=request.request)
 
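With the `relation` parameter threaded through these helpers, the same broker machinery can serve more than one ceph relation endpoint. The completion check itself is a simple reduction over the per-relation states dict; `is_complete` below is a toy restatement of `is_request_complete`'s loop (it is not the charmhelpers function, and the states dicts are fabricated):

```python
def is_complete(states):
    """Reduce a states dict (relation id -> {'sent': ..., 'complete': ...},
    as returned by get_request_states) the way is_request_complete does:
    any incomplete relation fails the whole request."""
    for rid in states:
        if not states[rid]['complete']:
            return False
    return True
```

So a request related to two ceph clusters only counts as complete once both relation ids report completion.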
=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-02-22 12:50:59 +0000
@@ -76,3 +76,13 @@
     check_call(cmd)
 
     return create_loopback(path)
+
+
+def is_mapped_loopback_device(device):
+    """
+    Checks if a given device name is an existing/mapped loopback device.
+    :param device: str: Full path to the device (eg, /dev/loop1).
+    :returns: str: Path to the backing file if it is a loopback device;
+        empty string otherwise.
+    """
+    return loopback_devices().get(device, "")
 
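`is_mapped_loopback_device` is a thin wrapper over the mapping that `loopback_devices()` returns (device path to backing file). With a stand-in mapping (the paths below are made up), the empty-string default is what lets callers use the result directly in a boolean test:

```python
def backing_file(device, mapping):
    """Return the backing file for a mapped loopback device, or an empty
    string, matching what is_mapped_loopback_device does with the
    mapping from loopback_devices()."""
    return mapping.get(device, "")

# hypothetical mapping, as loopback_devices() might build it from losetup
mapping = {'/dev/loop1': '/srv/images/store.img'}
```

An unmapped device yields `""`, which is falsy, so `if backing_file(dev, mapping): ...` reads naturally at call sites.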
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/hookenv.py 2016-02-22 12:50:59 +0000
@@ -492,7 +492,7 @@
 
 @cached
 def peer_relation_id():
-    '''Get a peer relation id if a peer relation has been joined, else None.'''
+    '''Get the peers relation id if a peers relation has been joined, else None.'''
     md = metadata()
     section = md.get('peers')
     if section:
@@ -517,12 +517,12 @@
 def relation_to_role_and_interface(relation_name):
     """
     Given the name of a relation, return the role and the name of the interface
-    that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
+    that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).
 
     :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
     """
     _metadata = metadata()
-    for role in ('provides', 'requires', 'peer'):
+    for role in ('provides', 'requires', 'peers'):
         interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
         if interface:
             return role, interface
@@ -534,7 +534,7 @@
     """
     Given a role and interface name, return a list of relation names for the
     current charm that use that interface under that role (where role is one
-    of ``provides``, ``requires``, or ``peer``).
+    of ``provides``, ``requires``, or ``peers``).
 
     :returns: A list of relation names.
     """
@@ -555,7 +555,7 @@
     :returns: A list of relation names.
     """
     results = []
-    for role in ('provides', 'requires', 'peer'):
+    for role in ('provides', 'requires', 'peers'):
         results.extend(role_and_interface_to_relations(role, interface_name))
     return results
 
@@ -637,7 +637,7 @@
 
 
 @cached
-def storage_get(attribute="", storage_id=""):
+def storage_get(attribute=None, storage_id=None):
     """Get storage attributes"""
     _args = ['storage-get', '--format=json']
     if storage_id:
@@ -651,7 +651,7 @@
 
 
 @cached
-def storage_list(storage_name=""):
+def storage_list(storage_name=None):
     """List the storage IDs for the unit"""
     _args = ['storage-list', '--format=json']
     if storage_name:
@@ -878,6 +878,40 @@
     subprocess.check_call(cmd)
 
 
+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
+def payload_register(ptype, klass, pid):
+    """Used while a hook is running to let Juju know that a
+    payload has been started."""
+    cmd = ['payload-register']
+    for x in [ptype, klass, pid]:
+        cmd.append(x)
+    subprocess.check_call(cmd)
+
+
+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
+def payload_unregister(klass, pid):
+    """Used while a hook is running to let Juju know
+    that a payload has been manually stopped. The <class> and <id> provided
+    must match a payload that has been previously registered with juju using
+    payload-register."""
+    cmd = ['payload-unregister']
+    for x in [klass, pid]:
+        cmd.append(x)
+    subprocess.check_call(cmd)
+
+
+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
+def payload_status_set(klass, pid, status):
+    """Used to update the current status of a registered payload.
+    The <class> and <id> provided must match a payload that has been previously
+    registered with juju using payload-register. The <status> must be one of the
+    following: starting, started, stopping, stopped."""
+    cmd = ['payload-status-set']
+    for x in [klass, pid, status]:
+        cmd.append(x)
+    subprocess.check_call(cmd)
+
+
 @cached
 def juju_version():
     """Full version string (eg. '1.23.3.1-trusty-amd64')"""
 
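Each payload helper shells out to the matching Juju hook tool with positional arguments. The argv construction can be checked without a Juju environment; the status validation below is an extra guard for illustration, not something `payload_status_set` itself performs:

```python
# the four states Juju accepts for payload-status-set
VALID_PAYLOAD_STATUS = ('starting', 'started', 'stopping', 'stopped')

def payload_status_argv(klass, pid, status):
    """Build the argv that payload_status_set() would execute, rejecting
    statuses Juju would refuse anyway."""
    if status not in VALID_PAYLOAD_STATUS:
        raise ValueError('invalid payload status: {}'.format(status))
    return ['payload-status-set', klass, pid, status]
```

Validating before the subprocess call turns a cryptic hook-tool failure into an immediate, descriptive exception.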
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/host.py 2016-02-22 12:50:59 +0000
@@ -67,10 +67,14 @@
     """Pause a system service.
 
     Stop it, and prevent it from starting again at boot."""
-    stopped = service_stop(service_name)
+    stopped = True
+    if service_running(service_name):
+        stopped = service_stop(service_name)
     upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
     sysv_file = os.path.join(initd_dir, service_name)
-    if os.path.exists(upstart_file):
+    if init_is_systemd():
+        service('disable', service_name)
+    elif os.path.exists(upstart_file):
         override_path = os.path.join(
             init_dir, '{}.override'.format(service_name))
         with open(override_path, 'w') as fh:
@@ -78,9 +82,9 @@
     elif os.path.exists(sysv_file):
         subprocess.check_call(["update-rc.d", service_name, "disable"])
     else:
-        # XXX: Support SystemD too
         raise ValueError(
-            "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
+            "Unable to detect {0} as SystemD, Upstart {1} or"
+            " SysV {2}".format(
                 service_name, upstart_file, sysv_file))
     return stopped
 
@@ -92,7 +96,9 @@
     Reenable starting again at boot. Start the service"""
     upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
     sysv_file = os.path.join(initd_dir, service_name)
-    if os.path.exists(upstart_file):
+    if init_is_systemd():
+        service('enable', service_name)
+    elif os.path.exists(upstart_file):
         override_path = os.path.join(
             init_dir, '{}.override'.format(service_name))
         if os.path.exists(override_path):
@@ -100,34 +106,43 @@
     elif os.path.exists(sysv_file):
         subprocess.check_call(["update-rc.d", service_name, "enable"])
     else:
-        # XXX: Support SystemD too
         raise ValueError(
-            "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
+            "Unable to detect {0} as SystemD, Upstart {1} or"
+            " SysV {2}".format(
                 service_name, upstart_file, sysv_file))
 
-    started = service_start(service_name)
+    started = service_running(service_name)
+    if not started:
+        started = service_start(service_name)
     return started
 
 
 def service(action, service_name):
     """Control a system service"""
-    cmd = ['service', service_name, action]
+    if init_is_systemd():
+        cmd = ['systemctl', action, service_name]
+    else:
+        cmd = ['service', service_name, action]
     return subprocess.call(cmd) == 0
 
 
-def service_running(service):
+def service_running(service_name):
     """Determine whether a system service is running"""
-    try:
-        output = subprocess.check_output(
-            ['service', service, 'status'],
-            stderr=subprocess.STDOUT).decode('UTF-8')
-    except subprocess.CalledProcessError:
-        return False
-    else:
-        if ("start/running" in output or "is running" in output):
-            return True
-        else:
+    if init_is_systemd():
+        return service('is-active', service_name)
+    else:
+        try:
+            output = subprocess.check_output(
+                ['service', service_name, 'status'],
+                stderr=subprocess.STDOUT).decode('UTF-8')
+        except subprocess.CalledProcessError:
            return False
+        else:
+            if ("start/running" in output or "is running" in output or
+                    "up and running" in output):
+                return True
+            else:
+                return False
 
 
 def service_available(service_name):
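The refactored `service()` picks its command by probing for `/run/systemd/system`. That decision can be factored into a pure function for testing; `service_argv` is a hypothetical name, but the argument ordering matches the diff (`systemctl <action> <name>` on systemd hosts, `service <name> <action>` elsewhere):

```python
import os

SYSTEMD_SYSTEM = '/run/systemd/system'

def service_argv(action, service_name, systemd=None):
    """Return the argv used to control a service, selecting between
    systemctl and the traditional `service` wrapper. The `systemd`
    override exists only so the selection can be tested off-host."""
    if systemd is None:
        # same probe as init_is_systemd(): the directory only exists
        # when systemd is PID 1
        systemd = os.path.isdir(SYSTEMD_SYSTEM)
    if systemd:
        return ['systemctl', action, service_name]
    return ['service', service_name, action]
```

Note the two tools reverse the action/name order, which is exactly why the branch cannot be collapsed into one template.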
@@ -142,8 +157,29 @@
     return True
 
 
-def adduser(username, password=None, shell='/bin/bash', system_user=False):
-    """Add a user to the system"""
+SYSTEMD_SYSTEM = '/run/systemd/system'
+
+
+def init_is_systemd():
+    """Return True if the host system uses systemd, False otherwise."""
+    return os.path.isdir(SYSTEMD_SYSTEM)
+
+
+def adduser(username, password=None, shell='/bin/bash', system_user=False,
+            primary_group=None, secondary_groups=None):
+    """Add a user to the system.
+
+    Will log but otherwise succeed if the user already exists.
+
+    :param str username: Username to create
+    :param str password: Password for user; if ``None``, create a system user
+    :param str shell: The default shell for the user
+    :param bool system_user: Whether to create a login or system user
+    :param str primary_group: Primary group for user; defaults to username
+    :param list secondary_groups: Optional list of additional groups
+
+    :returns: The password database entry struct, as returned by `pwd.getpwnam`
+    """
     try:
         user_info = pwd.getpwnam(username)
         log('user {0} already exists!'.format(username))
@@ -158,6 +194,16 @@
                 '--shell', shell,
                 '--password', password,
             ])
+        if not primary_group:
+            try:
+                grp.getgrnam(username)
+                primary_group = username  # avoid "group exists" error
+            except KeyError:
+                pass
+        if primary_group:
+            cmd.extend(['-g', primary_group])
+        if secondary_groups:
+            cmd.extend(['-G', ','.join(secondary_groups)])
         cmd.append(username)
         subprocess.check_call(cmd)
         user_info = pwd.getpwnam(username)
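The group handling added to `adduser()` defaults the primary group to the username when a group of that name already exists, then appends `-g`/`-G` flags. A sketch with `grp.getgrnam` replaced by an `existing_groups` stand-in so it runs anywhere (`useradd_argv` is a hypothetical helper, not part of the charm):

```python
def useradd_argv(username, primary_group=None, secondary_groups=None,
                 existing_groups=()):
    """Build a useradd argv the way the updated adduser() does: if no
    primary group was requested but a group named after the user already
    exists, reuse it so useradd does not fail with "group exists"."""
    cmd = ['useradd']
    if not primary_group and username in existing_groups:
        primary_group = username  # avoid "group exists" error
    if primary_group:
        cmd.extend(['-g', primary_group])
    if secondary_groups:
        cmd.extend(['-G', ','.join(secondary_groups)])
    cmd.append(username)
    return cmd
```

Secondary groups are joined with commas because `useradd -G` takes a single comma-separated list, not repeated flags.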
@@ -255,14 +301,12 @@
 
 
 def fstab_remove(mp):
-    """Remove the given mountpoint entry from /etc/fstab
-    """
+    """Remove the given mountpoint entry from /etc/fstab"""
     return Fstab.remove_by_mountpoint(mp)
 
 
 def fstab_add(dev, mp, fs, options=None):
-    """Adds the given device entry to the /etc/fstab file
-    """
+    """Adds the given device entry to the /etc/fstab file"""
     return Fstab.add(dev, mp, fs, options=options)
 
 
@@ -318,8 +362,7 @@
 
 
 def file_hash(path, hash_type='md5'):
-    """
-    Generate a hash checksum of the contents of 'path' or None if not found.
+    """Generate a hash checksum of the contents of 'path' or None if not found.
 
     :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
         such as md5, sha1, sha256, sha512, etc.
@@ -334,10 +377,9 @@
 
 
 def path_hash(path):
-    """
-    Generate a hash checksum of all files matching 'path'. Standard wildcards
-    like '*' and '?' are supported, see documentation for the 'glob' module for
-    more information.
+    """Generate a hash checksum of all files matching 'path'. Standard
+    wildcards like '*' and '?' are supported, see documentation for the 'glob'
+    module for more information.
 
     :return: dict: A { filename: hash } dictionary for all matched files.
         Empty if none found.
@@ -349,8 +391,7 @@
 
 
 def check_hash(path, checksum, hash_type='md5'):
-    """
-    Validate a file using a cryptographic checksum.
+    """Validate a file using a cryptographic checksum.
 
     :param str checksum: Value of the checksum used to validate the file.
     :param str hash_type: Hash algorithm used to generate `checksum`.
@@ -365,6 +406,7 @@
 
 
 class ChecksumError(ValueError):
+    """A class derived from ValueError to indicate the checksum failed."""
     pass
 
 
@@ -470,7 +512,7 @@
 
 
 def list_nics(nic_type=None):
-    '''Return a list of nics of given type(s)'''
+    """Return a list of nics of given type(s)"""
     if isinstance(nic_type, six.string_types):
         int_types = [nic_type]
     else:
@@ -512,12 +554,13 @@
 
 
 def set_nic_mtu(nic, mtu):
-    '''Set MTU on a network interface'''
+    """Set the Maximum Transmission Unit (MTU) on a network interface."""
     cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
     subprocess.check_call(cmd)
 
 
 def get_nic_mtu(nic):
+    """Return the Maximum Transmission Unit (MTU) for a network interface."""
     cmd = ['ip', 'addr', 'show', nic]
     ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
     mtu = ""
@@ -529,6 +572,7 @@
 
 
 def get_nic_hwaddr(nic):
+    """Return the Media Access Control (MAC) for a network interface."""
     cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
     ip_output = subprocess.check_output(cmd).decode('UTF-8')
     hwaddr = ""
@@ -539,7 +583,7 @@
 
 
 def cmp_pkgrevno(package, revno, pkgcache=None):
-    '''Compare supplied revno with the revno of the installed package
+    """Compare supplied revno with the revno of the installed package
 
     * 1 => Installed revno is greater than supplied arg
     * 0 => Installed revno is the same as supplied arg
@@ -548,7 +592,7 @@
     This function imports apt_cache function from charmhelpers.fetch if
     the pkgcache argument is None. Be sure to add charmhelpers.fetch if
     you call this function, or pass an apt_pkg.Cache() instance.
-    '''
+    """
     import apt_pkg
     if not pkgcache:
         from charmhelpers.fetch import apt_cache
@@ -558,19 +602,27 @@
 
 
 @contextmanager
-def chdir(d):
+def chdir(directory):
+    """Change the current working directory to a different directory for a code
+    block and return the previous directory after the block exits. Useful to
+    run commands from a specified directory.
+
+    :param str directory: The directory path to change to for this context.
+    """
     cur = os.getcwd()
     try:
-        yield os.chdir(d)
+        yield os.chdir(directory)
     finally:
         os.chdir(cur)
 
 
 def chownr(path, owner, group, follow_links=True, chowntopdir=False):
-    """
-    Recursively change user and group ownership of files and directories
+    """Recursively change user and group ownership of files and directories
     in given path. Doesn't chown path itself by default, only its children.
 
+    :param str path: The string path to start changing ownership.
+    :param str owner: The owner string to use when looking up the uid.
+    :param str group: The group string to use when looking up the gid.
     :param bool follow_links: Also Chown links if True
     :param bool chowntopdir: Also chown path itself if True
     """
@@ -594,15 +646,23 @@
 
 
 def lchownr(path, owner, group):
+    """Recursively change user and group ownership of files and directories
+    in a given path, not following symbolic links. See the documentation for
+    'os.lchown' for more information.
+
+    :param str path: The string path to start changing ownership.
+    :param str owner: The owner string to use when looking up the uid.
+    :param str group: The group string to use when looking up the gid.
+    """
     chownr(path, owner, group, follow_links=False)
 
 
 def get_total_ram():
-    '''The total amount of system RAM in bytes.
+    """The total amount of system RAM in bytes.
 
     This is what is reported by the OS, and may be overcommitted when
     there are multiple containers hosted on the same machine.
-    '''
+    """
     with open('/proc/meminfo', 'r') as f:
         for line in f.readlines():
             if line:
 
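The newly documented `chdir()` context manager restores the previous working directory even when the block raises, because the restore sits in a `finally` clause. The same shape, runnable stand-alone:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def chdir(directory):
    """Temporarily change the working directory for a code block,
    restoring the previous one on exit (including on exceptions)."""
    cur = os.getcwd()
    try:
        # os.chdir returns None, so the yielded value is never useful;
        # the context is used purely for its side effect
        yield os.chdir(directory)
    finally:
        os.chdir(cur)
```

Typical use is wrapping a subprocess call that must run from a specific directory, without leaking the directory change into the rest of the hook.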
=== modified file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2016-02-22 12:50:59 +0000
@@ -243,13 +243,15 @@
     :param str source: The template source file, relative to
         `$CHARM_DIR/templates`
 
-    :param str target: The target to write the rendered template to
+    :param str target: The target to write the rendered template to (or None)
     :param str owner: The owner of the rendered file
     :param str group: The group of the rendered file
     :param int perms: The permissions of the rendered file
     :param partial on_change_action: functools partial to be executed when
         rendered file changes
     :param jinja2 loader template_loader: A jinja2 template loader
+
+    :return str: The rendered template
     """
     def __init__(self, source, target,
                  owner='root', group='root', perms=0o444,
@@ -267,12 +269,14 @@
         if self.on_change_action and os.path.isfile(self.target):
             pre_checksum = host.file_hash(self.target)
         service = manager.get_service(service_name)
-        context = {}
+        context = {'ctx': {}}
         for ctx in service.get('required_data', []):
             context.update(ctx)
-        templating.render(self.source, self.target, context,
-                          self.owner, self.group, self.perms,
-                          template_loader=self.template_loader)
+            context['ctx'].update(ctx)
+
+        result = templating.render(self.source, self.target, context,
+                                   self.owner, self.group, self.perms,
+                                   template_loader=self.template_loader)
         if self.on_change_action:
             if pre_checksum == host.file_hash(self.target):
                 hookenv.log(
@@ -281,6 +285,8 @@
             else:
                 self.on_change_action()
 
+        return result
+
 
 # Convenience aliases for templates
 render_template = template = TemplateCallback
 
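The change to `TemplateCallback` seeds the render context with a `'ctx'` key that accumulates every `required_data` dict, so templates can reach the merged data both flat and namespaced. The merge behaves like this (`build_context` is a hypothetical restatement of the loop):

```python
def build_context(required_data):
    """Merge a list of context dicts the way the updated TemplateCallback
    does: each dict is applied flat at the top level AND aggregated
    under the 'ctx' key."""
    context = {'ctx': {}}
    for ctx in required_data:
        context.update(ctx)
        context['ctx'].update(ctx)
    return context
```

A template can then refer to `{{ port }}` as before, or iterate `ctx` to see everything the relations provided in one place.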
=== modified file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/templating.py 2016-02-22 12:50:59 +0000
@@ -27,7 +27,8 @@
 
     The `source` path, if not absolute, is relative to the `templates_dir`.
 
-    The `target` path should be absolute.
+    The `target` path should be absolute.  It can also be `None`, in which
+    case no file will be written.
 
     The context should be a dict containing the values to be replaced in the
     template.
@@ -36,6 +37,9 @@
 
     If omitted, `templates_dir` defaults to the `templates` folder in the charm.
 
+    The rendered template will be written to the file as well as being returned
+    as a string.
+
     Note: Using this requires python-jinja2; if it is not installed, calling
     this will attempt to use charmhelpers.fetch.apt_install to install it.
     """
@@ -67,9 +71,11 @@
                     level=hookenv.ERROR)
         raise e
     content = template.render(context)
-    target_dir = os.path.dirname(target)
-    if not os.path.exists(target_dir):
-        # This is a terrible default directory permission, as the file
-        # or its siblings will often contain secrets.
-        host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
-    host.write_file(target, content.encode(encoding), owner, group, perms)
+    if target is not None:
+        target_dir = os.path.dirname(target)
+        if not os.path.exists(target_dir):
+            # This is a terrible default directory permission, as the file
+            # or its siblings will often contain secrets.
+            host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
+        host.write_file(target, content.encode(encoding), owner, group, perms)
+    return content
 
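The `render()` change above does two things: the rendered content is now always returned, and a `None` target skips the file write. A minimal stdlib sketch of the same pattern, using `string.Template` in place of Jinja2 (names here are illustrative, not charmhelpers' own):

```python
import os
from string import Template


def render(source_template, target, context):
    """Render source_template with context; write to target unless it is None.

    Mirrors the charmhelpers change above in miniature: the rendered text is
    always returned, and the file write is skipped when target is None.
    """
    content = Template(source_template).substitute(context)
    if target is not None:
        target_dir = os.path.dirname(target)
        if target_dir and not os.path.exists(target_dir):
            os.makedirs(target_dir, 0o755)
        with open(target, 'w') as f:
            f.write(content)
    return content
```

This is what lets the services framework hash the result of a template callback without forcing a write to disk.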
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2016-02-22 12:50:59 +0000
@@ -98,6 +98,14 @@
     'liberty/proposed': 'trusty-proposed/liberty',
     'trusty-liberty/proposed': 'trusty-proposed/liberty',
     'trusty-proposed/liberty': 'trusty-proposed/liberty',
+    # Mitaka
+    'mitaka': 'trusty-updates/mitaka',
+    'trusty-mitaka': 'trusty-updates/mitaka',
+    'trusty-mitaka/updates': 'trusty-updates/mitaka',
+    'trusty-updates/mitaka': 'trusty-updates/mitaka',
+    'mitaka/proposed': 'trusty-proposed/mitaka',
+    'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
+    'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
 }
 
 # The order of this list is very important. Handlers should be listed in from
@@ -411,7 +419,7 @@
                 importlib.import_module(package),
                 classname)
             plugin_list.append(handler_class())
-        except (ImportError, AttributeError):
+        except NotImplementedError:
             # Skip missing plugins so that they can be ommitted from
             # installation if desired
             log("FetchHandler {} not found, skipping plugin".format(
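The new Mitaka aliases above all normalise to a single `<series>-<pocket>/<release>` cloud-archive pocket via a plain dict lookup after stripping the `cloud:` prefix. A simplified sketch of how the entries are consumed (the real lookup lives in `add_source()`; this excerpt hard-codes only a few of the new keys):

```python
# Excerpt of the mapping extended above: every alias resolves to one
# canonical cloud-archive pocket string.
CLOUD_ARCHIVE_POCKETS = {
    'mitaka': 'trusty-updates/mitaka',
    'trusty-mitaka': 'trusty-updates/mitaka',
    'mitaka/proposed': 'trusty-proposed/mitaka',
}


def pocket_for(source):
    # The 'cloud:' prefix is stripped before the lookup, matching how
    # configure_sources() hands 'cloud:trusty-mitaka' style values in.
    key = source.split(':', 1)[-1]
    return CLOUD_ARCHIVE_POCKETS[key]
```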
=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2016-02-22 12:50:59 +0000
@@ -108,7 +108,7 @@
         install_opener(opener)
         response = urlopen(source)
         try:
-            with open(dest, 'w') as dest_file:
+            with open(dest, 'wb') as dest_file:
                 dest_file.write(response.read())
         except Exception as e:
             if os.path.isfile(dest):
=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2016-02-22 12:50:59 +0000
@@ -15,60 +15,50 @@
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
 
 import os
+from subprocess import check_call
 from charmhelpers.fetch import (
     BaseFetchHandler,
-    UnhandledSource
+    UnhandledSource,
+    filter_installed_packages,
+    apt_install,
 )
 from charmhelpers.core.host import mkdir
 
-import six
-if six.PY3:
-    raise ImportError('bzrlib does not support Python3')
 
-try:
-    from bzrlib.branch import Branch
-    from bzrlib import bzrdir, workingtree, errors
-except ImportError:
-    from charmhelpers.fetch import apt_install
-    apt_install("python-bzrlib")
-    from bzrlib.branch import Branch
-    from bzrlib import bzrdir, workingtree, errors
+if filter_installed_packages(['bzr']) != []:
+    apt_install(['bzr'])
+    if filter_installed_packages(['bzr']) != []:
+        raise NotImplementedError('Unable to install bzr')
 
 
 class BzrUrlFetchHandler(BaseFetchHandler):
     """Handler for bazaar branches via generic and lp URLs"""
     def can_handle(self, source):
         url_parts = self.parse_url(source)
-        if url_parts.scheme not in ('bzr+ssh', 'lp'):
+        if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
             return False
+        elif not url_parts.scheme:
+            return os.path.exists(os.path.join(source, '.bzr'))
         else:
             return True
 
     def branch(self, source, dest):
-        url_parts = self.parse_url(source)
-        # If we use lp:branchname scheme we need to load plugins
         if not self.can_handle(source):
             raise UnhandledSource("Cannot handle {}".format(source))
-        if url_parts.scheme == "lp":
-            from bzrlib.plugin import load_plugins
-            load_plugins()
-        try:
-            local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
-        except errors.AlreadyControlDirError:
-            local_branch = Branch.open(dest)
-        try:
-            remote_branch = Branch.open(source)
-            remote_branch.push(local_branch)
-            tree = workingtree.WorkingTree.open(dest)
-            tree.update()
-        except Exception as e:
-            raise e
+        if os.path.exists(dest):
+            check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
+        else:
+            check_call(['bzr', 'branch', source, dest])
 
-    def install(self, source):
+    def install(self, source, dest=None):
         url_parts = self.parse_url(source)
         branch_name = url_parts.path.strip("/").split("/")[-1]
-        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
-                                branch_name)
+        if dest:
+            dest_dir = os.path.join(dest, branch_name)
+        else:
+            dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                    branch_name)
+
         if not os.path.exists(dest_dir):
             mkdir(dest_dir, perms=0o755)
         try:
 
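The reworked `can_handle()` above now also accepts a bare local path, provided it already contains a bzr checkout. A standalone sketch of that check, using `urlparse` directly rather than `BaseFetchHandler.parse_url` (an assumption for self-containment):

```python
import os
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2, as used by the charm


def can_handle_bzr(source):
    # Mirrors the extended can_handle() above: bzr+ssh/lp URLs as before,
    # plus scheme-less local paths containing an existing .bzr control dir.
    scheme = urlparse(source).scheme
    if scheme not in ('bzr+ssh', 'lp', ''):
        return False
    elif not scheme:
        return os.path.exists(os.path.join(source, '.bzr'))
    return True
```

The same local-path pattern is applied to the git handler below, with `.git` in place of `.bzr`.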
=== modified file 'hooks/charmhelpers/fetch/giturl.py'
--- hooks/charmhelpers/fetch/giturl.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/fetch/giturl.py 2016-02-22 12:50:59 +0000
@@ -15,24 +15,18 @@
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
 
 import os
+from subprocess import check_call, CalledProcessError
 from charmhelpers.fetch import (
     BaseFetchHandler,
-    UnhandledSource
+    UnhandledSource,
+    filter_installed_packages,
+    apt_install,
 )
-from charmhelpers.core.host import mkdir
-
-import six
-if six.PY3:
-    raise ImportError('GitPython does not support Python 3')
-
-try:
-    from git import Repo
-except ImportError:
-    from charmhelpers.fetch import apt_install
-    apt_install("python-git")
-    from git import Repo
-
-from git.exc import GitCommandError  # noqa E402
+
+if filter_installed_packages(['git']) != []:
+    apt_install(['git'])
+    if filter_installed_packages(['git']) != []:
+        raise NotImplementedError('Unable to install git')
 
 
 class GitUrlFetchHandler(BaseFetchHandler):
@@ -40,19 +34,24 @@
     def can_handle(self, source):
         url_parts = self.parse_url(source)
         # TODO (mattyw) no support for ssh git@ yet
-        if url_parts.scheme not in ('http', 'https', 'git'):
+        if url_parts.scheme not in ('http', 'https', 'git', ''):
             return False
+        elif not url_parts.scheme:
+            return os.path.exists(os.path.join(source, '.git'))
         else:
             return True
 
-    def clone(self, source, dest, branch, depth=None):
+    def clone(self, source, dest, branch="master", depth=None):
         if not self.can_handle(source):
             raise UnhandledSource("Cannot handle {}".format(source))
 
-        if depth:
-            Repo.clone_from(source, dest, branch=branch, depth=depth)
+        if os.path.exists(dest):
+            cmd = ['git', '-C', dest, 'pull', source, branch]
         else:
-            Repo.clone_from(source, dest, branch=branch)
+            cmd = ['git', 'clone', source, dest, '--branch', branch]
+        if depth:
+            cmd.extend(['--depth', depth])
+        check_call(cmd)
 
     def install(self, source, branch="master", dest=None, depth=None):
         url_parts = self.parse_url(source)
58 url_parts = self.parse_url(source)57 url_parts = self.parse_url(source)
@@ -62,11 +61,9 @@
         else:
             dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
                                     branch_name)
-        if not os.path.exists(dest_dir):
-            mkdir(dest_dir, perms=0o755)
         try:
             self.clone(source, dest_dir, branch, depth)
-        except GitCommandError as e:
+        except CalledProcessError as e:
             raise UnhandledSource(e)
         except OSError as e:
             raise UnhandledSource(e.strerror)
 
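The new `clone()` replaces GitPython with the git CLI: pull into an existing checkout, otherwise clone fresh. A standalone sketch of the command construction (function name is illustrative; note it coerces `depth` to `str`, which `subprocess` requires, and only applies `--depth` to the initial clone):

```python
import os


def build_git_cmd(source, dest, branch="master", depth=None):
    # Mirror of the clone() logic above: pull into an existing checkout,
    # otherwise clone fresh with an optional shallow --depth.
    if os.path.exists(dest):
        cmd = ['git', '-C', dest, 'pull', source, branch]
    else:
        cmd = ['git', 'clone', source, dest, '--branch', branch]
        if depth:
            cmd.extend(['--depth', str(depth)])
    return cmd
```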
=== modified file 'hooks/odl_controller_hooks.py'
--- hooks/odl_controller_hooks.py 2015-11-04 19:14:40 +0000
+++ hooks/odl_controller_hooks.py 2016-02-22 12:50:59 +0000
@@ -18,48 +18,57 @@
     adduser,
     mkdir,
     restart_on_change,
-    service_start
+    service_start,
+    service,
+    init_is_systemd,
 )
 
 from charmhelpers.fetch import (
     configure_sources, apt_install, install_remote)
 
-from odl_controller_utils import write_mvn_config, process_odl_cmds
-from odl_controller_utils import PROFILES
+from odl_controller_utils import (
+    write_mvn_config,
+    process_odl_cmds,
+    PROFILES,
+    configure_admin_user,
+    admin_user_password,
+    assess_status
+)
 
 PACKAGES = ["default-jre-headless", "python-jinja2"]
 KARAF_PACKAGE = "opendaylight-karaf"
 
 hooks = Hooks()
-config = config()
 
 
 @hooks.hook("config-changed")
 @restart_on_change({"/home/opendaylight/.m2/settings.xml": ["odl-controller"]})
 def config_changed():
-    process_odl_cmds(PROFILES[config["profile"]])
+    process_odl_cmds(PROFILES[config("profile")])
+    write_mvn_config()
+    configure_admin_user()
     for r_id in relation_ids("controller-api"):
         controller_api_joined(r_id)
-    write_mvn_config()
 
 
 @hooks.hook("controller-api-relation-joined")
 def controller_api_joined(r_id=None):
     relation_set(relation_id=r_id,
-                 port=PROFILES[config["profile"]]["port"],
-                 username="admin", password="admin")
+                 port=PROFILES[config("profile")]["port"],
+                 username="admin",
+                 password=admin_user_password())
 
 
 @hooks.hook()
 def install():
-    if config.get("install-sources"):
+    if config("install-sources"):
         configure_sources(update=True, sources_var="install-sources",
                           keys_var="install-keys")
 
     # install packages
     apt_install(PACKAGES, fatal=True)
 
-    install_url = config["install-url"]
+    install_url = config("install-url")
     if install_url:
         # install opendaylight from tarball
 
@@ -76,7 +85,12 @@
         apt_install([KARAF_PACKAGE], fatal=True)
         install_dir_name = "opendaylight-karaf"
 
-    shutil.copy("files/odl-controller.conf", "/etc/init")
+    if init_is_systemd():
+        shutil.copy("files/odl-controller.service", "/lib/systemd/system")
+        service('enable', 'odl-controller')
+    else:
+        shutil.copy("files/odl-controller.conf", "/etc/init")
+
     adduser("opendaylight", system_user=True)
     mkdir("/home/opendaylight", owner="opendaylight", group="opendaylight",
           perms=0755)
@@ -96,6 +110,7 @@
         hooks.execute(sys.argv)
     except UnregisteredHookError as e:
         log("Unknown hook {} - skipping.".format(e))
+    assess_status()
 
 
 @hooks.hook("ovsdb-manager-relation-joined")
 
=== modified file 'hooks/odl_controller_utils.py'
--- hooks/odl_controller_utils.py 2016-02-19 14:23:05 +0000
+++ hooks/odl_controller_utils.py 2016-02-22 12:50:59 +0000
@@ -3,9 +3,12 @@
 import urlparse
 
 from charmhelpers.core.templating import render
-from charmhelpers.core.hookenv import config
+from charmhelpers.core.hookenv import config, status_set
 from charmhelpers.core.decorators import retry_on_exception
+from charmhelpers.core.host import pwgen, service_running
+from charmhelpers.core.unitdata import kv
 
+from odl_helper import ODLAccountHelper, ODLInteractionError
 
 PROFILES = {
     "cisco-vpp": {
@@ -18,7 +21,8 @@
         "port": 8181
     },
     "openvswitch-odl": {
-        "feature:install": ["odl-base-all", "odl-aaa-authn",
+        "feature:install": ["odl-base-all",
+                            "odl-aaa-api", "odl-aaa-authn",
                             "odl-restconf", "odl-nsf-all",
                             "odl-adsal-northbound",
                             "odl-mdsal-apidocs",
@@ -28,22 +32,29 @@
         "port": 8080
     },
     "openvswitch-odl-lithium": {
-        "feature:install": ["odl-ovsdb-openstack"],
+        "feature:install": ["odl-ovsdb-openstack",
+                            "odl-restconf",
+                            "odl-aaa-api",
+                            "odl-aaa-authn",
+                            "odl-dlux-core"],
         "port": 8080
     },
     "openvswitch-odl-beryllium": {
         "feature:install": ["odl-ovsdb-openstack",
                             "odl-restconf",
+                            "odl-aaa-api",
                             "odl-aaa-authn",
                             "odl-dlux-all"],
         "port": 8080
     },
     "openvswitch-odl-beryllium-l3": {
-        "feature:install": ["odl-ovsdb-openstack"],
+        "feature:install": ["odl-ovsdb-openstack",
+                            "odl-aaa-api"],
         "port": 8080
     },
     "openvswitch-odl-beryllium-sfc": {
         "feature:install": ["odl-ovsdb-openstack",
+                            "odl-aaa-api",
                             "odl-sfc-core",
                             "odl-sfc-sb-rest",
                             "odl-sfc-ui",
@@ -55,6 +66,7 @@
     },
     "openvswitch-odl-beryllium-vpn": {
         "feature:install": ["odl-ovsdb-openstack",
+                            "odl-aaa-api",
                             "odl-vpnservice-api",
                             "odl-vpnservice-impl",
                             "odl-vpnservice-impl-rest",
@@ -140,9 +152,50 @@
 def process_odl_cmds(odl_cmds):
     features = filter_installed(odl_cmds.get("feature:install", []))
     if features:
+        status_set('maintenance', 'Installing requested features into karaf')
         run_odl(["feature:install"] + features)
     logging = odl_cmds.get("log:set")
     if logging:
         for log_level in logging.keys():
             for target in logging[log_level]:
                 run_odl(["log:set", log_level, target])
+
+
+DEFAULT_PASSWORD = 'admin'
+
+
+@retry_on_exception(5, base_delay=20, exc_type=ODLInteractionError)
+def configure_admin_user():
+    '''Configure password for admin user based on provided configuration'''
+    db = kv()
+    configured_admin_password = config('admin-password')
+    current_admin_password = db.get('admin-password',
+                                    default=DEFAULT_PASSWORD)
+
+    new_password = None
+    if (not current_admin_password.startswith('autogen(') and
+            not configured_admin_password):
+        # Auto generated password - can be overridden with a config-changed
+        new_password = "autogen({})".format(pwgen(length=32))
+    elif current_admin_password != configured_admin_password:
+        # Use configured admin password (new or changed)
+        new_password = configured_admin_password
+
+    if new_password:
+        status_set('maintenance', 'Updating password for admin user')
+        helper = ODLAccountHelper(password=current_admin_password)
+        helper.update_user(user='admin', password=new_password)
+        db.set('admin-password', new_password)
+        db.flush()
+
+
+def admin_user_password():
+    return kv().get('admin-password', default=DEFAULT_PASSWORD)
+
+
+def assess_status():
+    '''Determine status of current unit'''
+    if service_running('odl-controller'):
+        status_set('active', 'Unit is ready')
+    else:
+        status_set('blocked', 'OpenDayLight is not running')
 
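The password-selection rules in `configure_admin_user()` above can be isolated from the kv store and ODL API for clarity. A hypothetical standalone refactor of just the decision logic (names and the `generate` callable are illustrative, not part of the charm):

```python
def choose_admin_password(current, configured, generate):
    """Decision logic from configure_admin_user() above, in isolation.

    current:    password currently recorded in the unit's kv store
    configured: value of the admin-password config option (may be None)
    generate:   zero-argument callable producing a random password
    Returns the new password to set, or None when no change is needed.
    """
    if not current.startswith('autogen(') and not configured:
        # Default password, no operator-supplied value: auto-generate one.
        return "autogen({})".format(generate())
    elif configured and current != configured:
        # Operator supplied (or changed) an explicit password.
        return configured
    return None
```

An already auto-generated password is left alone until the operator sets `admin-password` explicitly, and an unchanged config value is a no-op.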
=== added file 'hooks/odl_helper.py'
--- hooks/odl_helper.py 1970-01-01 00:00:00 +0000
+++ hooks/odl_helper.py 2016-02-22 12:50:59 +0000
@@ -0,0 +1,96 @@
+import requests
+import json
+
+
+class ODLInteractionError(Exception):
+    '''Generic exception for failures with interacting with ODL'''
+    pass
+
+
+class ODLAccountHelper(object):
+    '''Helper class for interacting with the ODL AAA REST API'''
+
+    def __init__(self, host='localhost', port=8181,
+                 user='admin', password='admin',
+                 verify=False):
+        super(ODLAccountHelper, self).__init__()
+        self.host = host
+        self.port = port
+        self.user = user
+        self.password = password
+        self.session = requests.Session()
+        self.auth = requests.auth.HTTPBasicAuth(username=self.user,
+                                                password=self.password)
+        self.odl_base_url = (
+            "http://{}:{}/auth/v1".format(host, port)
+        )
+        self.session.verify = verify
+        self.session.headers.update({"content-type": "application/json"})
+
+    def get_users(self):
+        '''Retrieve a list of all users'''
+        response = self.session.get(url="{}/users".format(self.odl_base_url),
+                                    auth=self.auth)
+        if response.status_code not in [requests.codes.ok,
+                                        requests.codes.no_content]:
+            raise ODLInteractionError('Unable to retrieve users: {}'
+                                      .format(response.reason))
+        return json.loads(response.content)['users']
+
+    def get_user(self, user):
+        '''Retrieve details of a user'''
+        _users = self.get_users()
+        for _user in _users:
+            if _user['name'] == user:
+                return _user
+        raise ValueError('User {} not found'.format(user))
+
+    def user_exists(self, user):
+        '''Determine if a user exists'''
+        try:
+            return self.get_user(user) is not None
+        except ValueError:
+            return False
+
+    def update_user(self, user, password, description=None,
+                    email=None, enabled=True):
+        '''Update an existing user'''
+        user = self.get_user(user)
+        user['password'] = password
+        if description:
+            user['description'] = description
+        if email:
+            user['email'] = email
+        user['enabled'] = enabled
+        response = self.session.put(url="{}/users/{}".format(self.odl_base_url,
+                                                             user['userid']),
+                                    data=json.dumps(user),
+                                    auth=self.auth)
+
+        if response.status_code not in [requests.codes.ok,
+                                        requests.codes.no_content]:
+            raise ODLInteractionError('Unable to update user: {}'
+                                      .format(response.reason))
+
+        if user['name'] == self.user:
+            self.password = password
+        return response.content
+
+    def create_user(self, user, password, description=None,
+                    email=None, enabled=True):
+        if self.user_exists(user):
+            raise ValueError('User {} already exists'.format(user))
+        user = {
+            'name': user,
+            'password': password,
+            'description': description or '',
+            'email': email or '',
+            'enabled': enabled,
+        }
+        response = self.session.post(url="{}/users".format(self.odl_base_url),
+                                     auth=self.auth,
+                                     data=json.dumps(user))
+        if response.status_code not in [requests.codes.created]:
+            raise ODLInteractionError('Unable to create user: {}'
+                                      .format(response.reason))
+        return response.content
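`update_user()` above fetches the user record, overlays the requested changes, and PUTs it back whole. The field handling can be shown without any HTTP plumbing; a sketch of just the payload construction (the function name is illustrative, not part of the helper):

```python
import json


def build_user_update(user_record, password, description=None,
                      email=None, enabled=True):
    # Same field handling as ODLAccountHelper.update_user() above, minus the
    # HTTP call: start from the record fetched for the user and overlay only
    # the requested changes before serialising for the PUT.
    user = dict(user_record)
    user['password'] = password
    if description:
        user['description'] = description
    if email:
        user['email'] = email
    user['enabled'] = enabled
    return json.dumps(user, sort_keys=True)
```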
=== added symlink 'hooks/update-status'
=== target is u'odl_controller_hooks.py'
=== modified file 'tests/015-basic-trusty-icehouse'
--- tests/015-basic-trusty-icehouse 2015-11-11 14:57:38 +0000
+++ tests/015-basic-trusty-icehouse 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 """Amulet tests on a basic odl controller deployment on trusty-icehouse."""
 

=== modified file 'tests/016-basic-trusty-juno'
--- tests/016-basic-trusty-juno 2015-11-11 14:57:38 +0000
+++ tests/016-basic-trusty-juno 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 """Amulet tests on a basic odl controller deployment on trusty-juno."""
 

=== modified file 'tests/017-basic-trusty-kilo'
--- tests/017-basic-trusty-kilo 2015-11-11 14:57:38 +0000
+++ tests/017-basic-trusty-kilo 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 """Amulet tests on a basic odl controller deployment on trusty-kilo."""
 

=== modified file 'tests/018-basic-trusty-liberty'
--- tests/018-basic-trusty-liberty 2015-11-11 19:54:58 +0000
+++ tests/018-basic-trusty-liberty 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 """Amulet tests on a basic odl controller deployment on trusty-liberty."""
 

=== modified file 'tests/019-basic-trusty-mitaka' (properties changed: -x to +x)
--- tests/019-basic-trusty-mitaka 2016-01-19 12:47:39 +0000
+++ tests/019-basic-trusty-mitaka 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 """Amulet tests on a basic odl controller deployment on trusty-mitaka."""
 

=== modified file 'tests/020-basic-wily-liberty' (properties changed: -x to +x)
--- tests/020-basic-wily-liberty 2016-01-08 21:45:05 +0000
+++ tests/020-basic-wily-liberty 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 """Amulet tests on a basic odl controller deployment on wily-liberty."""
 

=== modified file 'tests/021-basic-xenial-mitaka'
--- tests/021-basic-xenial-mitaka 2016-01-08 21:45:05 +0000
+++ tests/021-basic-xenial-mitaka 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 """Amulet tests on a basic odl controller deployment on xenial-mitaka."""
 

=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 2016-02-19 18:08:58 +0000
+++ tests/basic_deployment.py 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 
 import amulet
 import os
@@ -102,7 +102,7 @@
         neutron_api_config = {'neutron-security-groups': 'False',
                               'manage-neutron-plugin-legacy-mode': 'False'}
         neutron_api_odl_config = {'overlay-network-type': 'vxlan gre'}
-        odl_controller_config = {}
+        odl_controller_config = {'admin-password': 'testpassword'}
         if os.environ.get('AMULET_ODL_LOCATION'):
             odl_controller_config['install-url'] = \
                 os.environ['AMULET_ODL_LOCATION']
 
=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 19:54:58 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-02-22 12:50:59 +0000
@@ -121,11 +121,12 @@
 
         # Charms which should use the source config option
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
-                      'ceph-osd', 'ceph-radosgw']
+                      'ceph-osd', 'ceph-radosgw', 'ceph-mon']
 
         # Charms which can not use openstack-origin, ie. many subordinates
         no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
-                     'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
+                     'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
+                     'cinder-backup']
 
         if self.openstack:
             for svc in services:
@@ -225,7 +226,8 @@
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
          self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
-         self.wily_liberty) = range(12)
+         self.wily_liberty, self.trusty_mitaka,
+         self.xenial_mitaka) = range(14)
 
         releases = {
             ('precise', None): self.precise_essex,
@@ -237,9 +239,11 @@
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
             ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
             ('utopic', None): self.utopic_juno,
             ('vivid', None): self.vivid_kilo,
-            ('wily', None): self.wily_liberty}
+            ('wily', None): self.wily_liberty,
+            ('xenial', None): self.xenial_mitaka}
         return releases[(self.series, self.openstack)]
 
     def _get_openstack_release_string(self):
@@ -256,6 +260,7 @@
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
             ('wily', 'liberty'),
+            ('xenial', 'mitaka'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
 
=== modified file 'unit_tests/test_odl_controller_hooks.py'
--- unit_tests/test_odl_controller_hooks.py 2015-11-11 14:57:38 +0000
+++ unit_tests/test_odl_controller_hooks.py 2016-02-22 12:50:59 +0000
@@ -21,6 +21,10 @@
     'service_start',
     'shutil',
     'write_mvn_config',
+    'configure_admin_user',
+    'admin_user_password',
+    'init_is_systemd',
+    'service',
 ]
 
 
@@ -29,11 +33,11 @@
     def setUp(self):
         super(ODLControllerHooksTests, self).setUp(hooks, TO_PATCH)
 
-        self.config.__getitem__.side_effect = self.test_config.get
-        self.config.get.side_effect = self.test_config.get
+        self.config.side_effect = self.test_config.get
         self.install_url = 'http://10.10.10.10/distribution-karaf.tgz'
         self.test_config.set('install-url', self.install_url)
         self.test_config.set('profile', 'default')
+        self.init_is_systemd.return_value = False
 
     def _call_hook(self, hookname):
         hooks.hooks.execute([
@@ -68,11 +72,44 @@
         self.shutil.copy.assert_called_with('files/odl-controller.conf',
                                             '/etc/init')
 
+    @patch('os.symlink')
+    @patch('os.path.exists')
+    @patch('os.listdir')
+    def test_install_hook_systemd(self, mock_listdir,
+                                  mock_path_exists, mock_symlink):
+        self.init_is_systemd.return_value = True
+        mock_listdir.return_value = ['random-file', 'distribution-karaf.tgz']
+        mock_path_exists.return_value = False
+        self._call_hook('install')
+        self.apt_install.assert_called_with([
+            "default-jre-headless", "python-jinja2"],
+            fatal=True
+        )
+        mock_symlink.assert_called_with('distribution-karaf.tgz',
+                                        '/opt/opendaylight-karaf')
+        self.adduser.assert_called_with("opendaylight", system_user=True)
+        self.mkdir.assert_has_calls([
+            call('/home/opendaylight', owner="opendaylight",
+                 group="opendaylight", perms=0755),
+            call('/var/log/opendaylight', owner="opendaylight",
+                 group="opendaylight", perms=0755)
+        ])
+        self.check_call.assert_called_with([
+            "chown", "-R", "opendaylight:opendaylight",
+            "/opt/distribution-karaf.tgz"
+        ])
+        self.write_mvn_config.assert_called_with()
+        self.service_start.assert_called_with('odl-controller')
+        self.shutil.copy.assert_called_with('files/odl-controller.service',
+                                            '/lib/systemd/system')
+        self.service.assert_called_with('enable', 'odl-controller')
+
     def test_ovsdb_manager_joined_hook(self):
         self._call_hook('ovsdb-manager-relation-joined')
         self.relation_set.assert_called_with(port=6640, protocol="tcp")
 
     def test_controller_api_relation_joined_hook(self):
+        self.admin_user_password.return_value = 'admin'
         self._call_hook('controller-api-relation-joined')
         self.relation_set.assert_called_with(relation_id=None, port=8080,
                                              username="admin",
@@ -93,3 +130,4 @@
93 ],130 ],
94 'port': 8080131 'port': 8080
95 })132 })
133 self.assertTrue(self.configure_admin_user.called)
96134
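For context, the systemd branch that `test_install_hook_systemd` exercises can be sketched roughly as follows. This is a hedged reconstruction from the test's assertions alone, not the charm's actual hook code: every collaborator (`apt_install`, `adduser`, `mkdir`, `service`, ...) is an injected stand-in, and the scan of `/opt` for the karaf tarball is an assumption inferred from the mocked `os.listdir`.

```python
import os


def install_systemd(apt_install, adduser, mkdir, check_call, copy,
                    service, service_start, write_mvn_config,
                    listdir=os.listdir, path_exists=os.path.exists,
                    symlink=os.symlink):
    # Java runtime plus jinja2 for rendering configuration templates.
    apt_install(["default-jre-headless", "python-jinja2"], fatal=True)
    # Find the unpacked karaf distribution and expose it at a stable path
    # (the /opt scan is an assumption based on the mocked listdir above).
    karaf = [f for f in listdir('/opt') if 'karaf' in f][0]
    if not path_exists('/opt/opendaylight-karaf'):
        symlink(karaf, '/opt/opendaylight-karaf')
    adduser("opendaylight", system_user=True)
    for d in ('/home/opendaylight', '/var/log/opendaylight'):
        mkdir(d, owner="opendaylight", group="opendaylight", perms=0o755)
    check_call(["chown", "-R", "opendaylight:opendaylight",
                "/opt/" + karaf])
    write_mvn_config()
    # Under systemd a unit file is installed and enabled, instead of the
    # upstart job copied to /etc/init on older releases.
    copy('files/odl-controller.service', '/lib/systemd/system')
    service('enable', 'odl-controller')
    service_start('odl-controller')


# Exercise the sketch with recording stubs, mirroring the unit test's mocks.
calls = []


def rec(name):
    return lambda *a, **kw: calls.append((name, a, kw))


install_systemd(rec('apt_install'), rec('adduser'), rec('mkdir'),
                rec('check_call'), rec('copy'), rec('service'),
                rec('service_start'), rec('write_mvn_config'),
                listdir=lambda p: ['random-file', 'distribution-karaf.tgz'],
                path_exists=lambda p: False,
                symlink=rec('symlink'))
print([name for name, _, _ in calls])
```

The ordering above (packages, symlink, user, directories, ownership, maven config, unit file, enable, start) is only one plausible sequence consistent with the assertions; `assert_called_with` does not pin down call order.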
=== modified file 'unit_tests/test_odl_controller_utils.py'
--- unit_tests/test_odl_controller_utils.py 2015-11-17 16:20:15 +0000
+++ unit_tests/test_odl_controller_utils.py 2016-02-22 12:50:59 +0000
@@ -1,4 +1,4 @@
-from mock import patch, call
+from mock import patch, call, MagicMock
 from test_utils import CharmTestCase
 
 import odl_controller_utils as utils
@@ -9,6 +9,9 @@
     'render',
     'config',
     'retry_on_exception',
+    'kv',
+    'ODLAccountHelper',
+    'pwgen',
 ]
 
 
@@ -91,3 +94,94 @@
             call(["feature:install", "odl-l2switch-all"]),
             call(['log:set', 'TRACE', 'cosc-cvpn-ovs-rest'])
         ])
+
+    def test_admin_user_password(self):
+        db = MagicMock()
+        self.kv.return_value = db
+        db.get.return_value = 'testpassword'
+        self.assertEqual(utils.admin_user_password(), 'testpassword')
+        db.get.assert_called_with('admin-password',
+                                  default=utils.DEFAULT_PASSWORD)
+
+    def test_configure_admin_user_autogen(self):
+        self.test_config.set('admin-password', None)
+        self.pwgen.return_value = 'newgeneratedpassword'
+        helper = MagicMock()
+        self.ODLAccountHelper.return_value = helper
+        db = MagicMock()
+        self.kv.return_value = db
+        db.get.return_value = utils.DEFAULT_PASSWORD
+        utils.configure_admin_user()
+        db.get.assert_called_with('admin-password',
+                                  default=utils.DEFAULT_PASSWORD)
+        db.set.assert_called_with('admin-password',
+                                  'autogen(newgeneratedpassword)')
+        helper.update_user.assert_called_with(
+            user='admin',
+            password='autogen(newgeneratedpassword)'
+        )
+        self.assertTrue(db.flush.called)
+
+    def test_configure_admin_user_unset(self):
+        self.test_config.set('admin-password', None)
+        self.pwgen.return_value = 'newgeneratedpassword'
+        helper = MagicMock()
+        self.ODLAccountHelper.return_value = helper
+        db = MagicMock()
+        self.kv.return_value = db
+        db.get.return_value = 'manualpassword'
+        utils.configure_admin_user()
+        db.get.assert_called_with('admin-password',
+                                  default=utils.DEFAULT_PASSWORD)
+        db.set.assert_called_with('admin-password',
+                                  'autogen(newgeneratedpassword)')
+        helper.update_user.assert_called_with(
+            user='admin',
+            password='autogen(newgeneratedpassword)'
+        )
+        self.assertTrue(db.flush.called)
+
+    def test_configure_admin_user_nochange_autogen(self):
+        self.test_config.set('admin-password', None)
+        helper = MagicMock()
+        self.ODLAccountHelper.return_value = helper
+        db = MagicMock()
+        self.kv.return_value = db
+        db.get.return_value = 'autogen(generatedpassword)'
+        utils.configure_admin_user()
+        db.get.assert_called_with('admin-password',
+                                  default=utils.DEFAULT_PASSWORD)
+        self.assertFalse(db.set.called)
+        self.assertFalse(helper.update_user.called)
+        self.assertFalse(db.flush.called)
+
+    def test_configure_admin_user_nochange_manual(self):
+        self.test_config.set('admin-password', 'manualpassword')
+        helper = MagicMock()
+        self.ODLAccountHelper.return_value = helper
+        db = MagicMock()
+        self.kv.return_value = db
+        db.get.return_value = 'manualpassword'
+        utils.configure_admin_user()
+        db.get.assert_called_with('admin-password',
+                                  default=utils.DEFAULT_PASSWORD)
+        self.assertFalse(db.set.called)
+        self.assertFalse(helper.update_user.called)
+        self.assertFalse(db.flush.called)
+
+    def test_configure_admin_user_config(self):
+        self.test_config.set('admin-password', 'manualpassword')
+        helper = MagicMock()
+        self.ODLAccountHelper.return_value = helper
+        db = MagicMock()
+        self.kv.return_value = db
+        db.get.return_value = utils.DEFAULT_PASSWORD
+        utils.configure_admin_user()
+        self.config.assert_called_with('admin-password')
+        db.get.assert_called_with('admin-password',
+                                  default=utils.DEFAULT_PASSWORD)
+        db.set.assert_called_with('admin-password',
+                                  'manualpassword')
+        helper.update_user.assert_called_with(user='admin',
+                                              password='manualpassword')
+        self.assertTrue(db.flush.called)
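Taken together, the five `configure_admin_user` tests pin down a small state machine: a manually configured password always wins, an unset config triggers generation via `pwgen`, autogenerated passwords are stored wrapped as `autogen(...)` so a later run can recognise them and avoid regenerating, and any change is pushed to ODL through `ODLAccountHelper.update_user` before the kv store is flushed. A hedged reconstruction of that logic, with the charm's real collaborators (`config`, `kv`, `pwgen`, `ODLAccountHelper`) replaced by plain injected arguments, might look like:

```python
# DEFAULT_PASSWORD mirrors the constant the tests reference as
# utils.DEFAULT_PASSWORD; its actual value in the charm is assumed here.
DEFAULT_PASSWORD = 'admin'


def configure_admin_user(config_get, db, helper, pwgen):
    """Apply the configured (or autogenerated) ODL admin password.

    Autogenerated passwords are stored wrapped as 'autogen(...)' so later
    invocations can tell them apart from operator-supplied values.
    """
    stored = db.get('admin-password', default=DEFAULT_PASSWORD)
    requested = config_get('admin-password')
    new = requested if requested else 'autogen({0})'.format(pwgen())
    # No-op when the password is unchanged, or when it was previously
    # autogenerated and no manual value has been configured since.
    if stored == new or (not requested and stored.startswith('autogen(')):
        return
    db.set('admin-password', new)
    helper.update_user(user='admin', password=new)
    db.flush()


class FakeDB(object):
    """Minimal stand-in for the charm-helpers unitdata kv store."""
    def __init__(self):
        self.data = {}
        self.flushed = False

    def get(self, key, default=None):
        return self.data.get(key, default)

    def set(self, key, value):
        self.data[key] = value

    def flush(self):
        self.flushed = True


class FakeHelper(object):
    def __init__(self):
        self.calls = []

    def update_user(self, user, password):
        self.calls.append((user, password))


# Autogen path: no config value, password generated and marked.
db = FakeDB()
helper = FakeHelper()
configure_admin_user(lambda k: None, db, helper, lambda: 'generated')
print(db.data['admin-password'])  # prints: autogen(generated)
```

This sketch satisfies all five scenarios the tests assert, but the real helper may differ in detail (for example, in how it reads config or constructs `ODLAccountHelper`).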
