Merge lp:~1chb1n/charms/trusty/odl-controller/next-amulet-mitaka-1601 into lp:~openstack-charmers-archive/charms/trusty/odl-controller/next

Proposed by Ryan Beisner
Status: Rejected
Rejected by: Ryan Beisner
Proposed branch: lp:~1chb1n/charms/trusty/odl-controller/next-amulet-mitaka-1601
Merge into: lp:~openstack-charmers-archive/charms/trusty/odl-controller/next
Diff against target: 2070 lines (+889/-288)
20 files modified
hooks/charmhelpers/contrib/network/ip.py (+21/-19)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+9/-3)
hooks/charmhelpers/contrib/openstack/context.py (+32/-2)
hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh (+7/-5)
hooks/charmhelpers/contrib/openstack/neutron.py (+8/-8)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+19/-11)
hooks/charmhelpers/contrib/openstack/utils.py (+100/-54)
hooks/charmhelpers/contrib/python/packages.py (+13/-4)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+441/-59)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/core/hookenv.py (+41/-7)
hooks/charmhelpers/core/host.py (+103/-43)
hooks/charmhelpers/core/services/helpers.py (+11/-5)
hooks/charmhelpers/core/templating.py (+13/-7)
hooks/charmhelpers/fetch/__init__.py (+9/-1)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
tests/basic_deployment.py (+1/-1)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+8/-3)
To merge this branch: bzr merge lp:~1chb1n/charms/trusty/odl-controller/next-amulet-mitaka-1601
Reviewer: James Page
Review status: Needs Fixing
Review via email: mp+283530@code.launchpad.net

Description of the change

Enable mitaka amulet test; update test for neutron catalog names (lp1535410).

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17861 odl-controller-next for 1chb1n mp283530
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17861/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #16692 odl-controller-next for 1chb1n mp283530
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/16692/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8939 odl-controller-next for 1chb1n mp283530
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/14592346/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8939/

19. By Ryan Beisner

sync charmhelpers for mitaka cloud archive recognition

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17930 odl-controller-next for 1chb1n mp283530
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17930/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #16755 odl-controller-next for 1chb1n mp283530
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/16755/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8955 odl-controller-next for 1chb1n mp283530
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/14595570/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8955/

Revision history for this message
James Page (james-page) wrote :

Ryan

There is a bug in charm-helpers, which I fixed this morning, that causes odl-controller to fail to install - a resync on this MP should resolve it.

review: Needs Fixing
20. By Ryan Beisner

resync charm helpers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #16764 odl-controller-next for 1chb1n mp283530
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/16764/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #17940 odl-controller-next for 1chb1n mp283530
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/17940/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #8965 odl-controller-next for 1chb1n mp283530
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/14598619/
Build: http://10.245.162.77:8080/job/charm_amulet_test/8965/

Unmerged revisions

20. By Ryan Beisner

resync charm helpers

19. By Ryan Beisner

sync charmhelpers for mitaka cloud archive recognition

18. By Ryan Beisner

enable mitaka amulet test

17. By Ryan Beisner

update test for neutron catalog names (lp1535410)

Preview Diff

=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/network/ip.py 2016-01-22 14:18:52 +0000
@@ -53,7 +53,7 @@
 
 
 def no_ip_found_error_out(network):
-    errmsg = ("No IP address found in network: %s" % network)
+    errmsg = ("No IP address found in network(s): %s" % network)
     raise ValueError(errmsg)
 
 
@@ -61,7 +61,7 @@
     """Get an IPv4 or IPv6 address within the network from the host.
 
     :param network (str): CIDR presentation format. For example,
-        '192.168.1.0/24'.
+        '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
     :param fallback (str): If no address is found, return fallback.
     :param fatal (boolean): If no address is found, fallback is not
         set and fatal is True then exit(1).
@@ -75,24 +75,26 @@
     else:
         return None
 
-    _validate_cidr(network)
-    network = netaddr.IPNetwork(network)
-    for iface in netifaces.interfaces():
-        addresses = netifaces.ifaddresses(iface)
-        if network.version == 4 and netifaces.AF_INET in addresses:
-            addr = addresses[netifaces.AF_INET][0]['addr']
-            netmask = addresses[netifaces.AF_INET][0]['netmask']
-            cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
-            if cidr in network:
-                return str(cidr.ip)
+    networks = network.split() or [network]
+    for network in networks:
+        _validate_cidr(network)
+        network = netaddr.IPNetwork(network)
+        for iface in netifaces.interfaces():
+            addresses = netifaces.ifaddresses(iface)
+            if network.version == 4 and netifaces.AF_INET in addresses:
+                addr = addresses[netifaces.AF_INET][0]['addr']
+                netmask = addresses[netifaces.AF_INET][0]['netmask']
+                cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
+                if cidr in network:
+                    return str(cidr.ip)
 
-        if network.version == 6 and netifaces.AF_INET6 in addresses:
-            for addr in addresses[netifaces.AF_INET6]:
-                if not addr['addr'].startswith('fe80'):
-                    cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
-                                                        addr['netmask']))
-                    if cidr in network:
-                        return str(cidr.ip)
+            if network.version == 6 and netifaces.AF_INET6 in addresses:
+                for addr in addresses[netifaces.AF_INET6]:
+                    if not addr['addr'].startswith('fe80'):
+                        cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
+                                                            addr['netmask']))
+                        if cidr in network:
+                            return str(cidr.ip)
 
     if fallback is not None:
         return fallback
 
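The multi-network lookup added to get_address_in_network() above can be sketched with only the standard library. This is a hypothetical simplification: it swaps netaddr/netifaces for the stdlib ipaddress module and takes the host's addresses as a plain list, but keeps the key change - `network` may be a space-delimited list of CIDRs, tried in order:

```python
import ipaddress


def get_address_in_network(network, host_addrs, fallback=None):
    """Return the first host address falling in any of the listed networks.

    :param network: single CIDR or space-delimited list of CIDRs.
    :param host_addrs: stand-in for the addresses netifaces would report.
    :param fallback: returned when no address matches.
    """
    for net in network.split():
        cidr = ipaddress.ip_network(net)
        for addr in host_addrs:
            if ipaddress.ip_address(addr) in cidr:
                return addr
    return fallback


addrs = ['10.0.0.5', '192.168.1.7']
# First network matches nothing, second matches the .1.7 address:
print(get_address_in_network('172.16.0.0/12 192.168.1.0/24', addrs))
```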
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-22 14:18:52 +0000
@@ -124,7 +124,9 @@
                       'ceph-osd', 'ceph-radosgw']
 
         # Charms which can not use openstack-origin, ie. many subordinates
-        no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
+        no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
+                     'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
+                     'cinder-backup']
 
         if self.openstack:
             for svc in services:
@@ -224,7 +226,8 @@
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
          self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
-         self.wily_liberty) = range(12)
+         self.wily_liberty, self.trusty_mitaka,
+         self.xenial_mitaka) = range(14)
 
         releases = {
             ('precise', None): self.precise_essex,
@@ -236,9 +239,11 @@
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
             ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
             ('utopic', None): self.utopic_juno,
             ('vivid', None): self.vivid_kilo,
-            ('wily', None): self.wily_liberty}
+            ('wily', None): self.wily_liberty,
+            ('xenial', None): self.xenial_mitaka}
         return releases[(self.series, self.openstack)]
 
     def _get_openstack_release_string(self):
@@ -255,6 +260,7 @@
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
             ('wily', 'liberty'),
+            ('xenial', 'mitaka'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
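The release bookkeeping the amulet helper extends above boils down to a dict keyed on (series, openstack-origin). A minimal sketch, with the helper class and most entries omitted and only the releases touched by this MP kept:

```python
# Indices mirror the range(14) unpacking in the diff; only four shown here.
(trusty_liberty, wily_liberty, trusty_mitaka, xenial_mitaka) = range(4)

releases = {
    ('trusty', 'cloud:trusty-liberty'): trusty_liberty,
    ('trusty', 'cloud:trusty-mitaka'): trusty_mitaka,
    ('wily', None): wily_liberty,
    ('xenial', None): xenial_mitaka,  # xenial's default origin is mitaka
}


def openstack_release(series, origin=None):
    """Map (series, openstack-origin) to a release index."""
    return releases[(series, origin)]


assert openstack_release('xenial') == xenial_mitaka
assert openstack_release('trusty', 'cloud:trusty-mitaka') == trusty_mitaka
```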
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py 2016-01-22 14:18:52 +0000
@@ -57,6 +57,7 @@
     get_nic_hwaddr,
     mkdir,
     write_file,
+    pwgen,
 )
 from charmhelpers.contrib.hahelpers.cluster import (
     determine_apache_port,
@@ -87,6 +88,8 @@
     is_bridge_member,
 )
 from charmhelpers.contrib.openstack.utils import get_host_ip
+from charmhelpers.core.unitdata import kv
+
 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
 ADDRESS_TYPES = ['admin', 'internal', 'public']
 
@@ -626,15 +629,28 @@
         if config('haproxy-client-timeout'):
             ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
 
+        if config('haproxy-queue-timeout'):
+            ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
+
+        if config('haproxy-connect-timeout'):
+            ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
+
         if config('prefer-ipv6'):
             ctxt['ipv6'] = True
             ctxt['local_host'] = 'ip6-localhost'
             ctxt['haproxy_host'] = '::'
-            ctxt['stat_port'] = ':::8888'
         else:
             ctxt['local_host'] = '127.0.0.1'
             ctxt['haproxy_host'] = '0.0.0.0'
-            ctxt['stat_port'] = ':8888'
+
+        ctxt['stat_port'] = '8888'
+
+        db = kv()
+        ctxt['stat_password'] = db.get('stat-password')
+        if not ctxt['stat_password']:
+            ctxt['stat_password'] = db.set('stat-password',
+                                           pwgen(32))
+            db.flush()
 
         for frontend in cluster_hosts:
             if (len(cluster_hosts[frontend]['backends']) > 1 or
@@ -1088,6 +1104,20 @@
                 config_flags_parser(config_flags)}
 
 
+class LibvirtConfigFlagsContext(OSContextGenerator):
+    """
+    This context provides support for extending
+    the libvirt section through user-defined flags.
+    """
+    def __call__(self):
+        ctxt = {}
+        libvirt_flags = config('libvirt-flags')
+        if libvirt_flags:
+            ctxt['libvirt_flags'] = config_flags_parser(
+                libvirt_flags)
+        return ctxt
+
+
 class SubordinateConfigContext(OSContextGenerator):
 
     """
 
=== modified file 'hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh'
--- hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh 2016-01-22 14:18:52 +0000
@@ -9,15 +9,17 @@
 CRITICAL=0
 NOTACTIVE=''
 LOGFILE=/var/log/nagios/check_haproxy.log
-AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
+AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR=1{print $4}')
 
-for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2'});
+typeset -i N_INSTANCES=0
+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
 do
-    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
+    N_INSTANCES=N_INSTANCES+1
+    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
     if [ $? != 0 ]; then
         date >> $LOGFILE
         echo $output >> $LOGFILE
-        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&1
+        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
         CRITICAL=1
         NOTACTIVE="${NOTACTIVE} $appserver"
     fi
@@ -28,5 +30,5 @@
     exit 2
 fi
 
-echo "OK: All haproxy instances looking good"
+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
 exit 0
 
=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-01-22 14:18:52 +0000
@@ -50,7 +50,7 @@
     if kernel_version() >= (3, 13):
         return []
     else:
-        return ['openvswitch-datapath-dkms']
+        return [headers_package(), 'openvswitch-datapath-dkms']
 
 
 # legacy
@@ -70,7 +70,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=QUANTUM_CONF_DIR)],
         'services': ['quantum-plugin-openvswitch-agent'],
-        'packages': [[headers_package()] + determine_dkms_package(),
+        'packages': [determine_dkms_package(),
                      ['quantum-plugin-openvswitch-agent']],
         'server_packages': ['quantum-server',
                             'quantum-plugin-openvswitch'],
@@ -111,7 +111,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=NEUTRON_CONF_DIR)],
         'services': ['neutron-plugin-openvswitch-agent'],
-        'packages': [[headers_package()] + determine_dkms_package(),
+        'packages': [determine_dkms_package(),
                      ['neutron-plugin-openvswitch-agent']],
         'server_packages': ['neutron-server',
                             'neutron-plugin-openvswitch'],
@@ -155,7 +155,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=NEUTRON_CONF_DIR)],
         'services': [],
-        'packages': [[headers_package()] + determine_dkms_package(),
+        'packages': [determine_dkms_package(),
                      ['neutron-plugin-cisco']],
         'server_packages': ['neutron-server',
                             'neutron-plugin-cisco'],
@@ -174,7 +174,7 @@
                      'neutron-dhcp-agent',
                      'nova-api-metadata',
                      'etcd'],
-        'packages': [[headers_package()] + determine_dkms_package(),
+        'packages': [determine_dkms_package(),
                      ['calico-compute',
                       'bird',
                       'neutron-dhcp-agent',
@@ -204,8 +204,8 @@
                                      database=config('database'),
                                      ssl_dir=NEUTRON_CONF_DIR)],
         'services': [],
-        'packages': [['plumgrid-lxc'],
-                     ['iovisor-dkms']],
+        'packages': ['plumgrid-lxc',
+                     'iovisor-dkms'],
         'server_packages': ['neutron-server',
                             'neutron-plugin-plumgrid'],
         'server_services': ['neutron-server']
@@ -219,7 +219,7 @@
                                      relation_prefix='neutron',
                                      ssl_dir=NEUTRON_CONF_DIR)],
         'services': [],
-        'packages': [[headers_package()] + determine_dkms_package()],
+        'packages': [determine_dkms_package()],
         'server_packages': ['neutron-server',
                             'python-neutron-plugin-midonet'],
         'server_services': ['neutron-server']
 
=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2016-01-22 14:18:52 +0000
@@ -12,27 +12,35 @@
     option tcplog
     option dontlognull
     retries 3
-    timeout queue 1000
-    timeout connect 1000
-{% if haproxy_client_timeout -%}
+{%- if haproxy_queue_timeout %}
+    timeout queue {{ haproxy_queue_timeout }}
+{%- else %}
+    timeout queue 5000
+{%- endif %}
+{%- if haproxy_connect_timeout %}
+    timeout connect {{ haproxy_connect_timeout }}
+{%- else %}
+    timeout connect 5000
+{%- endif %}
+{%- if haproxy_client_timeout %}
     timeout client {{ haproxy_client_timeout }}
-{% else -%}
+{%- else %}
     timeout client 30000
-{% endif -%}
-
-{% if haproxy_server_timeout -%}
+{%- endif %}
+{%- if haproxy_server_timeout %}
     timeout server {{ haproxy_server_timeout }}
-{% else -%}
+{%- else %}
     timeout server 30000
-{% endif -%}
+{%- endif %}
 
-listen stats {{ stat_port }}
+listen stats
+    bind {{ local_host }}:{{ stat_port }}
     mode http
     stats enable
     stats hide-version
     stats realm Haproxy\ Statistics
     stats uri /
-    stats auth admin:password
+    stats auth admin:{{ stat_password }}
 
 {% if frontends -%}
 {% for service, ports in service_ports.items() -%}
 
=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
--- hooks/charmhelpers/contrib/openstack/utils.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-01-22 14:18:52 +0000
@@ -86,6 +86,7 @@
     ('utopic', 'juno'),
     ('vivid', 'kilo'),
     ('wily', 'liberty'),
+    ('xenial', 'mitaka'),
 ])
 
 
@@ -99,61 +100,70 @@
     ('2014.2', 'juno'),
     ('2015.1', 'kilo'),
     ('2015.2', 'liberty'),
+    ('2016.1', 'mitaka'),
 ])
 
-# The ugly duckling
+# The ugly duckling - must list releases oldest to newest
 SWIFT_CODENAMES = OrderedDict([
-    ('1.4.3', 'diablo'),
-    ('1.4.8', 'essex'),
-    ('1.7.4', 'folsom'),
-    ('1.8.0', 'grizzly'),
-    ('1.7.7', 'grizzly'),
-    ('1.7.6', 'grizzly'),
-    ('1.10.0', 'havana'),
-    ('1.9.1', 'havana'),
-    ('1.9.0', 'havana'),
-    ('1.13.1', 'icehouse'),
-    ('1.13.0', 'icehouse'),
-    ('1.12.0', 'icehouse'),
-    ('1.11.0', 'icehouse'),
-    ('2.0.0', 'juno'),
-    ('2.1.0', 'juno'),
-    ('2.2.0', 'juno'),
-    ('2.2.1', 'kilo'),
-    ('2.2.2', 'kilo'),
-    ('2.3.0', 'liberty'),
-    ('2.4.0', 'liberty'),
-    ('2.5.0', 'liberty'),
+    ('diablo',
+        ['1.4.3']),
+    ('essex',
+        ['1.4.8']),
+    ('folsom',
+        ['1.7.4']),
+    ('grizzly',
+        ['1.7.6', '1.7.7', '1.8.0']),
+    ('havana',
+        ['1.9.0', '1.9.1', '1.10.0']),
+    ('icehouse',
+        ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
+    ('juno',
+        ['2.0.0', '2.1.0', '2.2.0']),
+    ('kilo',
+        ['2.2.1', '2.2.2']),
+    ('liberty',
+        ['2.3.0', '2.4.0', '2.5.0']),
+    ('mitaka',
+        ['2.5.0']),
 ])
 
 # >= Liberty version->codename mapping
 PACKAGE_CODENAMES = {
     'nova-common': OrderedDict([
-        ('12.0.0', 'liberty'),
+        ('12.0', 'liberty'),
+        ('13.0', 'mitaka'),
     ]),
     'neutron-common': OrderedDict([
-        ('7.0.0', 'liberty'),
+        ('7.0', 'liberty'),
+        ('8.0', 'mitaka'),
    ]),
     'cinder-common': OrderedDict([
-        ('7.0.0', 'liberty'),
+        ('7.0', 'liberty'),
+        ('8.0', 'mitaka'),
     ]),
     'keystone': OrderedDict([
-        ('8.0.0', 'liberty'),
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
     ]),
     'horizon-common': OrderedDict([
-        ('8.0.0', 'liberty'),
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
     ]),
     'ceilometer-common': OrderedDict([
-        ('5.0.0', 'liberty'),
+        ('5.0', 'liberty'),
+        ('6.0', 'mitaka'),
     ]),
     'heat-common': OrderedDict([
-        ('5.0.0', 'liberty'),
+        ('5.0', 'liberty'),
+        ('6.0', 'mitaka'),
     ]),
     'glance-common': OrderedDict([
-        ('11.0.0', 'liberty'),
+        ('11.0', 'liberty'),
+        ('12.0', 'mitaka'),
     ]),
     'openstack-dashboard': OrderedDict([
-        ('8.0.0', 'liberty'),
+        ('8.0', 'liberty'),
+        ('9.0', 'mitaka'),
     ]),
 }
 
@@ -216,6 +226,33 @@
     error_out(e)
 
 
+def get_os_version_codename_swift(codename):
+    '''Determine OpenStack version number of swift from codename.'''
+    for k, v in six.iteritems(SWIFT_CODENAMES):
+        if k == codename:
+            return v[-1]
+    e = 'Could not derive swift version for '\
+        'codename: %s' % codename
+    error_out(e)
+
+
+def get_swift_codename(version):
+    '''Determine OpenStack codename that corresponds to swift version.'''
+    codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
+    if len(codenames) > 1:
+        # If more than one release codename contains this version we determine
+        # the actual codename based on the highest available install source.
+        for codename in reversed(codenames):
+            releases = UBUNTU_OPENSTACK_RELEASE
+            release = [k for k, v in six.iteritems(releases) if codename in v]
+            ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
+            if codename in ret or release[0] in ret:
+                return codename
+    elif len(codenames) == 1:
+        return codenames[0]
+    return None
+
+
 def get_os_codename_package(package, fatal=True):
     '''Derive OpenStack release codename from an installed package.'''
     import apt_pkg as apt
@@ -240,7 +277,14 @@
         error_out(e)
 
     vers = apt.upstream_version(pkg.current_ver.ver_str)
-    match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
+    if 'swift' in pkg.name:
+        # Fully x.y.z match for swift versions
+        match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
+    else:
+        # x.y match only for 20XX.X
+        # and ignore patch level for other packages
+        match = re.match('^(\d+)\.(\d+)', vers)
+
     if match:
         vers = match.group(0)
 
@@ -252,13 +296,8 @@
     # < Liberty co-ordinated project versions
     try:
         if 'swift' in pkg.name:
-            swift_vers = vers[:5]
-            if swift_vers not in SWIFT_CODENAMES:
-                # Deal with 1.10.0 upward
-                swift_vers = vers[:6]
-            return SWIFT_CODENAMES[swift_vers]
+            return get_swift_codename(vers)
         else:
-            vers = vers[:6]
             return OPENSTACK_CODENAMES[vers]
     except KeyError:
         if not fatal:
@@ -276,12 +315,14 @@
 
     if 'swift' in pkg:
         vers_map = SWIFT_CODENAMES
+        for cname, version in six.iteritems(vers_map):
+            if cname == codename:
+                return version[-1]
     else:
         vers_map = OPENSTACK_CODENAMES
-
-    for version, cname in six.iteritems(vers_map):
-        if cname == codename:
-            return version
+        for version, cname in six.iteritems(vers_map):
+            if cname == codename:
+                return version
     # e = "Could not determine OpenStack version for package: %s" % pkg
     # error_out(e)
 
@@ -377,6 +418,9 @@
         'liberty': 'trusty-updates/liberty',
         'liberty/updates': 'trusty-updates/liberty',
         'liberty/proposed': 'trusty-proposed/liberty',
+        'mitaka': 'trusty-updates/mitaka',
+        'mitaka/updates': 'trusty-updates/mitaka',
+        'mitaka/proposed': 'trusty-proposed/mitaka',
     }
 
     try:
@@ -444,11 +488,16 @@
     cur_vers = get_os_version_package(package)
     if "swift" in package:
         codename = get_os_codename_install_source(src)
-        available_vers = get_os_version_codename(codename, SWIFT_CODENAMES)
+        avail_vers = get_os_version_codename_swift(codename)
     else:
-        available_vers = get_os_version_install_source(src)
+        avail_vers = get_os_version_install_source(src)
     apt.init()
-    return apt.version_compare(available_vers, cur_vers) == 1
+    if "swift" in package:
+        major_cur_vers = cur_vers.split('.', 1)[0]
+        major_avail_vers = avail_vers.split('.', 1)[0]
+        major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
+        return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
+    return apt.version_compare(avail_vers, cur_vers) == 1
 
 
 def ensure_block_device(block_device):
@@ -577,7 +626,7 @@
     return yaml.load(projects_yaml)
 
 
-def git_clone_and_install(projects_yaml, core_project, depth=1):
+def git_clone_and_install(projects_yaml, core_project):
     """
     Clone/install all specified OpenStack repositories.
 
@@ -627,6 +676,9 @@
     for p in projects['repositories']:
         repo = p['repository']
         branch = p['branch']
+        depth = '1'
+        if 'depth' in p.keys():
+            depth = p['depth']
         if p['name'] == 'requirements':
             repo_dir = _git_clone_and_install_single(repo, branch, depth,
                                                      parent_dir, http_proxy,
@@ -671,19 +723,13 @@
     """
     Clone and install a single git repository.
     """
-    dest_dir = os.path.join(parent_dir, os.path.basename(repo))
-
     if not os.path.exists(parent_dir):
         juju_log('Directory already exists at {}. '
                  'No need to create directory.'.format(parent_dir))
         os.mkdir(parent_dir)
 
-    if not os.path.exists(dest_dir):
-        juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
-        repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
-                                  depth=depth)
-    else:
-        repo_dir = dest_dir
+    juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
+    repo_dir = install_remote(repo, dest=parent_dir, branch=branch, depth=depth)
 
     venv = os.path.join(parent_dir, 'venv')
 
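The new get_swift_codename() helper in this sync handles a swift version that maps to more than one release (2.5.0 appears under both liberty and mitaka in the reworked SWIFT_CODENAMES). A rough sketch of the idea, with the real helper's `apt-cache policy` probe replaced by a hypothetical newest-wins rule:

```python
from collections import OrderedDict

# Abbreviated codename -> versions mapping, oldest to newest, as in the diff.
SWIFT_CODENAMES = OrderedDict([
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0']),
])


def get_swift_codename(version):
    """Return the codename listing this swift version, or None.

    charm-helpers disambiguates via apt-cache policy; this sketch simply
    prefers the newest release that lists the version.
    """
    codenames = [k for k, v in SWIFT_CODENAMES.items() if version in v]
    if len(codenames) == 1:
        return codenames[0]
    if codenames:
        return codenames[-1]  # newest-wins stand-in for the apt probe
    return None


print(get_swift_codename('2.4.0'))  # unambiguous: liberty
print(get_swift_codename('2.5.0'))  # ambiguous: resolves to mitaka here
```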
=== modified file 'hooks/charmhelpers/contrib/python/packages.py'
--- hooks/charmhelpers/contrib/python/packages.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/python/packages.py 2016-01-22 14:18:52 +0000
@@ -42,8 +42,12 @@
42 yield "--{0}={1}".format(key, value)42 yield "--{0}={1}".format(key, value)
4343
4444
45def pip_install_requirements(requirements, **options):45def pip_install_requirements(requirements, constraints=None, **options):
46 """Install a requirements file """46 """Install a requirements file.
47
48 :param constraints: Path to pip constraints file.
49 http://pip.readthedocs.org/en/stable/user_guide/#constraints-files
50 """
47 command = ["install"]51 command = ["install"]
4852
49 available_options = ('proxy', 'src', 'log', )53 available_options = ('proxy', 'src', 'log', )
@@ -51,8 +55,13 @@
             command.append(option)

     command.append("-r {0}".format(requirements))
-    log("Installing from file: {} with options: {}".format(requirements,
-                                                           command))
+    if constraints:
+        command.append("-c {0}".format(constraints))
+        log("Installing from file: {} with constraints {} "
+            "and options: {}".format(requirements, constraints, command))
+    else:
+        log("Installing from file: {} with options: {}".format(requirements,
+                                                               command))
     pip_execute(command)


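For review convenience: the new `constraints` parameter only adds a `-c <file>` element to the pip command list before `pip_execute` runs it. A minimal standalone sketch of that command construction (the `build_pip_command` helper name is illustrative, not part of charm-helpers, and the actual `pip_execute` call is deliberately left out so this runs anywhere):

```python
def build_pip_command(requirements, constraints=None):
    # Mirrors the diff's logic: always "-r <requirements>", and
    # "-c <constraints>" only when a constraints file is supplied.
    # Note each option is a single list element, as in the original.
    command = ["install"]
    command.append("-r {0}".format(requirements))
    if constraints:
        command.append("-c {0}".format(constraints))
    return command

print(build_pip_command("requirements.txt"))
print(build_pip_command("requirements.txt", "constraints.txt"))
```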
=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2016-01-22 14:18:52 +0000
@@ -23,6 +23,8 @@
 # James Page <james.page@ubuntu.com>
 # Adam Gandelman <adamg@ubuntu.com>
 #
+import bisect
+import six

 import os
 import shutil
@@ -72,6 +74,394 @@
 err to syslog = {use_syslog}
 clog to syslog = {use_syslog}
 """
+# For 50 < osds < 240,000 OSDs (Roughly 1 Exabyte at 6T OSDs)
+powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608]
+
+
+def validator(value, valid_type, valid_range=None):
+    """
+    Used to validate these: http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
+    Example input:
+        validator(value=1,
+                  valid_type=int,
+                  valid_range=[0, 2])
+    This says I'm testing value=1. It must be an int inclusive in [0,2]
+
+    :param value: The value to validate
+    :param valid_type: The type that value should be.
+    :param valid_range: A range of values that value can assume.
+    :return:
+    """
+    assert isinstance(value, valid_type), "{} is not a {}".format(
+        value,
+        valid_type)
+    if valid_range is not None:
+        assert isinstance(valid_range, list), \
+            "valid_range must be a list, was given {}".format(valid_range)
+        # If we're dealing with strings
+        if valid_type is six.string_types:
+            assert value in valid_range, \
+                "{} is not in the list {}".format(value, valid_range)
+        # Integer, float should have a min and max
+        else:
+            if len(valid_range) != 2:
+                raise ValueError(
+                    "Invalid valid_range list of {} for {}. "
+                    "List must be [min,max]".format(valid_range, value))
+            assert value >= valid_range[0], \
+                "{} is less than minimum allowed value of {}".format(
+                    value, valid_range[0])
+            assert value <= valid_range[1], \
+                "{} is greater than maximum allowed value of {}".format(
+                    value, valid_range[1])
+
+
+class PoolCreationError(Exception):
+    """
+    A custom error to inform the caller that a pool creation failed. Provides an error message
+    """
+    def __init__(self, message):
+        super(PoolCreationError, self).__init__(message)
+
+
+class Pool(object):
+    """
+    An object oriented approach to Ceph pool creation. This base class is inherited by ReplicatedPool and ErasurePool.
+    Do not call create() on this base class as it will not do anything. Instantiate a child class and call create().
+    """
+    def __init__(self, service, name):
+        self.service = service
+        self.name = name
+
+    # Create the pool if it doesn't exist already
+    # To be implemented by subclasses
+    def create(self):
+        pass
+
+    def add_cache_tier(self, cache_pool, mode):
+        """
+        Adds a new cache tier to an existing pool.
+        :param cache_pool: six.string_types. The cache tier pool name to add.
+        :param mode: six.string_types. The caching mode to use for this pool. valid range = ["readonly", "writeback"]
+        :return: None
+        """
+        # Check the input types and values
+        validator(value=cache_pool, valid_type=six.string_types)
+        validator(value=mode, valid_type=six.string_types, valid_range=["readonly", "writeback"])
+
+        check_call(['ceph', '--id', self.service, 'osd', 'tier', 'add', self.name, cache_pool])
+        check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, mode])
+        check_call(['ceph', '--id', self.service, 'osd', 'tier', 'set-overlay', self.name, cache_pool])
+        check_call(['ceph', '--id', self.service, 'osd', 'pool', 'set', cache_pool, 'hit_set_type', 'bloom'])
+
+    def remove_cache_tier(self, cache_pool):
+        """
+        Removes a cache tier from Ceph. Flushes all dirty objects from writeback pools and waits for that to complete.
+        :param cache_pool: six.string_types. The cache tier pool name to remove.
+        :return: None
+        """
+        # read-only is easy, writeback is much harder
+        mode = get_cache_mode(cache_pool)
+        if mode == 'readonly':
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'none'])
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
+
+        elif mode == 'writeback':
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'cache-mode', cache_pool, 'forward'])
+            # Flush the cache and wait for it to return
+            check_call(['ceph', '--id', self.service, '-p', cache_pool, 'cache-flush-evict-all'])
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove-overlay', self.name])
+            check_call(['ceph', '--id', self.service, 'osd', 'tier', 'remove', self.name, cache_pool])
+
+    def get_pgs(self, pool_size):
+        """
+        :param pool_size: int. pool_size is either the number of replicas for replicated pools or the K+M sum for
+            erasure coded pools
+        :return: int. The number of pgs to use.
+        """
+        validator(value=pool_size, valid_type=int)
+        osds = get_osds(self.service)
+        if not osds:
+            # NOTE(james-page): Default to 200 for older ceph versions
+            # which don't support OSD query from cli
+            return 200
+
+        # Calculate based on Ceph best practices
+        if osds < 5:
+            return 128
+        elif 5 < osds < 10:
+            return 512
+        elif 10 < osds < 50:
+            return 4096
+        else:
+            estimate = (osds * 100) / pool_size
+            # Return the next nearest power of 2
+            index = bisect.bisect_right(powers_of_two, estimate)
+            return powers_of_two[index]
+
+
+class ReplicatedPool(Pool):
+    def __init__(self, service, name, replicas=2):
+        super(ReplicatedPool, self).__init__(service=service, name=name)
+        self.replicas = replicas
+
+    def create(self):
+        if not pool_exists(self.service, self.name):
+            # Create it
+            pgs = self.get_pgs(self.replicas)
+            cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs)]
+            try:
+                check_call(cmd)
+            except CalledProcessError:
+                raise
+
+
+# Default jerasure erasure coded pool
+class ErasurePool(Pool):
+    def __init__(self, service, name, erasure_code_profile="default"):
+        super(ErasurePool, self).__init__(service=service, name=name)
+        self.erasure_code_profile = erasure_code_profile
+
+    def create(self):
+        if not pool_exists(self.service, self.name):
+            # Try to find the erasure profile information so we can properly size the pgs
+            erasure_profile = get_erasure_profile(service=self.service, name=self.erasure_code_profile)
+
+            # Check for errors
+            if erasure_profile is None:
+                log(message='Failed to discover erasure_profile named={}'.format(self.erasure_code_profile),
+                    level=ERROR)
+                raise PoolCreationError(message='unable to find erasure profile {}'.format(self.erasure_code_profile))
+            if 'k' not in erasure_profile or 'm' not in erasure_profile:
+                # Error
+                log(message='Unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile),
+                    level=ERROR)
+                raise PoolCreationError(
+                    message='unable to find k (data chunks) or m (coding chunks) in {}'.format(erasure_profile))
+
+            pgs = self.get_pgs(int(erasure_profile['k']) + int(erasure_profile['m']))
+            # Create it
+            cmd = ['ceph', '--id', self.service, 'osd', 'pool', 'create', self.name, str(pgs),
+                   'erasure', self.erasure_code_profile]
+            try:
+                check_call(cmd)
+            except CalledProcessError:
+                raise
+
+    """Get an existing erasure code profile if it already exists.
+       Returns json formatted output"""
+
+
+def get_erasure_profile(service, name):
+    """
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param name:
+    :return:
+    """
+    try:
+        out = check_output(['ceph', '--id', service,
+                            'osd', 'erasure-code-profile', 'get',
+                            name, '--format=json'])
+        return json.loads(out)
+    except (CalledProcessError, OSError, ValueError):
+        return None
+
+
+def pool_set(service, pool_name, key, value):
+    """
+    Sets a value for a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param key: six.string_types
+    :param value:
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', pool_name, key, value]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def snapshot_pool(service, pool_name, snapshot_name):
+    """
+    Snapshots a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'mksnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def remove_pool_snapshot(service, pool_name, snapshot_name):
+    """
+    Remove a snapshot from a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param snapshot_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'rmsnap', pool_name, snapshot_name]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+# max_bytes should be an int or long
+def set_pool_quota(service, pool_name, max_bytes):
+    """
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :param max_bytes: int or long
+    :return: None. Can raise CalledProcessError
+    """
+    # Set a byte quota on a RADOS pool in ceph.
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', max_bytes]
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def remove_pool_quota(service, pool_name):
+    """
+    Set a byte quota on a RADOS pool in ceph.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :return: None. Can raise CalledProcessError
+    """
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set-quota', pool_name, 'max_bytes', '0']
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def create_erasure_profile(service, profile_name, erasure_plugin_name='jerasure', failure_domain='host',
+                           data_chunks=2, coding_chunks=1,
+                           locality=None, durability_estimator=None):
+    """
+    Create a new erasure code profile if one does not already exist for it. Updates
+    the profile if it exists. Please see http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/
+    for more details
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param profile_name: six.string_types
+    :param erasure_plugin_name: six.string_types
+    :param failure_domain: six.string_types. One of ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region',
+        'room', 'root', 'row'])
+    :param data_chunks: int
+    :param coding_chunks: int
+    :param locality: int
+    :param durability_estimator: int
+    :return: None. Can raise CalledProcessError
+    """
+    # Ensure this failure_domain is allowed by Ceph
+    validator(failure_domain, six.string_types,
+              ['chassis', 'datacenter', 'host', 'osd', 'pdu', 'pod', 'rack', 'region', 'room', 'root', 'row'])
+
+    cmd = ['ceph', '--id', service, 'osd', 'erasure-code-profile', 'set', profile_name,
+           'plugin=' + erasure_plugin_name, 'k=' + str(data_chunks), 'm=' + str(coding_chunks),
+           'ruleset_failure_domain=' + failure_domain]
+    if locality is not None and durability_estimator is not None:
+        raise ValueError("create_erasure_profile should be called with k, m and one of l or c but not both.")
+
+    # Add plugin specific information
+    if locality is not None:
+        # For local erasure codes
+        cmd.append('l=' + str(locality))
+    if durability_estimator is not None:
+        # For Shec erasure codes
+        cmd.append('c=' + str(durability_estimator))
+
+    if erasure_profile_exists(service, profile_name):
+        cmd.append('--force')
+
+    try:
+        check_call(cmd)
+    except CalledProcessError:
+        raise
+
+
+def rename_pool(service, old_name, new_name):
+    """
+    Rename a Ceph pool from old_name to new_name
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param old_name: six.string_types
+    :param new_name: six.string_types
+    :return: None
+    """
+    validator(value=old_name, valid_type=six.string_types)
+    validator(value=new_name, valid_type=six.string_types)
+
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'rename', old_name, new_name]
+    check_call(cmd)
+
+
+def erasure_profile_exists(service, name):
+    """
+    Check to see if an Erasure code profile already exists.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param name: six.string_types
+    :return: int or None
+    """
+    validator(value=name, valid_type=six.string_types)
+    try:
+        check_call(['ceph', '--id', service,
+                    'osd', 'erasure-code-profile', 'get',
+                    name])
+        return True
+    except CalledProcessError:
+        return False
+
+
+def get_cache_mode(service, pool_name):
+    """
+    Find the current caching mode of the pool_name given.
+    :param service: six.string_types. The Ceph user name to run the command under
+    :param pool_name: six.string_types
+    :return: int or None
+    """
+    validator(value=service, valid_type=six.string_types)
+    validator(value=pool_name, valid_type=six.string_types)
+    out = check_output(['ceph', '--id', service, 'osd', 'dump', '--format=json'])
+    try:
+        osd_json = json.loads(out)
+        for pool in osd_json['pools']:
+            if pool['pool_name'] == pool_name:
+                return pool['cache_mode']
+        return None
+    except ValueError:
+        raise
+
+
+def pool_exists(service, name):
+    """Check to see if a RADOS pool already exists."""
+    try:
+        out = check_output(['rados', '--id', service,
+                            'lspools']).decode('UTF-8')
+    except CalledProcessError:
+        return False
+
+    return name in out
+
+
+def get_osds(service):
+    """Return a list of all Ceph Object Storage Daemons currently in the
+    cluster.
+    """
+    version = ceph_version()
+    if version and version >= '0.56':
+        return json.loads(check_output(['ceph', '--id', service,
+                                        'osd', 'ls',
+                                        '--format=json']).decode('UTF-8'))
+
+    return None


 def install():
@@ -101,53 +491,37 @@
     check_call(cmd)


-def pool_exists(service, name):
-    """Check to see if a RADOS pool already exists."""
-    try:
-        out = check_output(['rados', '--id', service,
-                            'lspools']).decode('UTF-8')
-    except CalledProcessError:
-        return False
-
-    return name in out
-
-
-def get_osds(service):
-    """Return a list of all Ceph Object Storage Daemons currently in the
-    cluster.
-    """
-    version = ceph_version()
-    if version and version >= '0.56':
-        return json.loads(check_output(['ceph', '--id', service,
-                                        'osd', 'ls',
-                                        '--format=json']).decode('UTF-8'))
-
-    return None
-
-
-def create_pool(service, name, replicas=3):
+def update_pool(client, pool, settings):
+    cmd = ['ceph', '--id', client, 'osd', 'pool', 'set', pool]
+    for k, v in six.iteritems(settings):
+        cmd.append(k)
+        cmd.append(v)
+
+    check_call(cmd)
+
+
+def create_pool(service, name, replicas=3, pg_num=None):
     """Create a new RADOS pool."""
     if pool_exists(service, name):
         log("Ceph pool {} already exists, skipping creation".format(name),
             level=WARNING)
         return

-    # Calculate the number of placement groups based
-    # on upstream recommended best practices.
-    osds = get_osds(service)
-    if osds:
-        pgnum = (len(osds) * 100 // replicas)
-    else:
-        # NOTE(james-page): Default to 200 for older ceph versions
-        # which don't support OSD query from cli
-        pgnum = 200
-
-    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
-    check_call(cmd)
-
-    cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
-           str(replicas)]
-    check_call(cmd)
+    if not pg_num:
+        # Calculate the number of placement groups based
+        # on upstream recommended best practices.
+        osds = get_osds(service)
+        if osds:
+            pg_num = (len(osds) * 100 // replicas)
+        else:
+            # NOTE(james-page): Default to 200 for older ceph versions
+            # which don't support OSD query from cli
+            pg_num = 200
+
+    cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pg_num)]
+    check_call(cmd)
+
+    update_pool(service, name, settings={'size': str(replicas)})


 def delete_pool(service, name):
@@ -202,10 +576,10 @@
         log('Created new keyfile at %s.' % keyfile, level=INFO)


-def get_ceph_nodes():
-    """Query named relation 'ceph' to determine current nodes."""
+def get_ceph_nodes(relation='ceph'):
+    """Query named relation to determine current nodes."""
     hosts = []
-    for r_id in relation_ids('ceph'):
+    for r_id in relation_ids(relation):
         for unit in related_units(r_id):
             hosts.append(relation_get('private-address', unit=unit, rid=r_id))

@@ -357,14 +731,14 @@
             service_start(svc)


-def ensure_ceph_keyring(service, user=None, group=None):
+def ensure_ceph_keyring(service, user=None, group=None, relation='ceph'):
     """Ensures a ceph keyring is created for a named service and optionally
     ensures user and group ownership.

     Returns False if no ceph key is available in relation state.
     """
     key = None
-    for rid in relation_ids('ceph'):
+    for rid in relation_ids(relation):
         for unit in related_units(rid):
             key = relation_get('key', rid=rid, unit=unit)
             if key:
@@ -405,6 +779,7 @@

     The API is versioned and defaults to version 1.
     """
+
     def __init__(self, api_version=1, request_id=None):
         self.api_version = api_version
         if request_id:
@@ -413,9 +788,16 @@
             self.request_id = str(uuid.uuid1())
         self.ops = []

-    def add_op_create_pool(self, name, replica_count=3):
+    def add_op_create_pool(self, name, replica_count=3, pg_num=None):
+        """Adds an operation to create a pool.
+
+        @param pg_num setting: optional setting. If not provided, this value
+        will be calculated by the broker based on how many OSDs are in the
+        cluster at the time of creation. Note that, if provided, this value
+        will be capped at the current available maximum.
+        """
         self.ops.append({'op': 'create-pool', 'name': name,
-                         'replicas': replica_count})
+                         'replicas': replica_count, 'pg_num': pg_num})

     def set_ops(self, ops):
         """Set request ops to provided value.
@@ -433,8 +815,8 @@
     def _ops_equal(self, other):
         if len(self.ops) == len(other.ops):
             for req_no in range(0, len(self.ops)):
-                for key in ['replicas', 'name', 'op']:
-                    if self.ops[req_no][key] != other.ops[req_no][key]:
+                for key in ['replicas', 'name', 'op', 'pg_num']:
+                    if self.ops[req_no].get(key) != other.ops[req_no].get(key):
                         return False
         else:
             return False
@@ -540,7 +922,7 @@
     return request


-def get_request_states(request):
+def get_request_states(request, relation='ceph'):
     """Return a dict of requests per relation id with their corresponding
     completion state.

@@ -552,7 +934,7 @@
552 """934 """
553 complete = []935 complete = []
554 requests = {}936 requests = {}
555 for rid in relation_ids('ceph'):937 for rid in relation_ids(relation):
556 complete = False938 complete = False
557 previous_request = get_previous_request(rid)939 previous_request = get_previous_request(rid)
558 if request == previous_request:940 if request == previous_request:
@@ -570,14 +952,14 @@
     return requests


-def is_request_sent(request):
+def is_request_sent(request, relation='ceph'):
     """Check to see if a functionally equivalent request has already been sent

     Returns True if a similair request has been sent

     @param request: A CephBrokerRq object
     """
-    states = get_request_states(request)
+    states = get_request_states(request, relation=relation)
     for rid in states.keys():
         if not states[rid]['sent']:
             return False
@@ -585,7 +967,7 @@
     return True


-def is_request_complete(request):
+def is_request_complete(request, relation='ceph'):
     """Check to see if a functionally equivalent request has already been
     completed

@@ -593,7 +975,7 @@

     @param request: A CephBrokerRq object
     """
-    states = get_request_states(request)
+    states = get_request_states(request, relation=relation)
     for rid in states.keys():
         if not states[rid]['complete']:
             return False
@@ -643,15 +1025,15 @@
     return 'broker-rsp-' + local_unit().replace('/', '-')


-def send_request_if_needed(request):
+def send_request_if_needed(request, relation='ceph'):
     """Send broker request if an equivalent request has not already been sent

     @param request: A CephBrokerRq object
     """
-    if is_request_sent(request):
+    if is_request_sent(request, relation=relation):
         log('Request already sent but not complete, not sending new request',
             level=DEBUG)
     else:
-        for rid in relation_ids('ceph'):
+        for rid in relation_ids(relation):
             log('Sending request {}'.format(request.request_id), level=DEBUG)
             relation_set(relation_id=rid, broker_req=request.request)

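The PG sizing rule added to `Pool.get_pgs` above (OSD-count buckets for small clusters, then the next power of two above `osds * 100 / pool_size` for large ones) is easy to sanity-check standalone. A sketch under the same assumptions; the `estimate_pgs` helper name is illustrative, and note that OSD counts of exactly 5 or 10 fall through to the estimate branch, exactly as in the diff:

```python
import bisect

# Same pre-computed table as the diff:
# for 50 < osds < 240,000 (roughly 1 exabyte at 6T OSDs)
powers_of_two = [8192, 16384, 32768, 65536, 131072, 262144, 524288,
                 1048576, 2097152, 4194304, 8388608]


def estimate_pgs(osds, pool_size):
    """Mirror of Pool.get_pgs(): osds is the OSD count, pool_size is the
    replica count (replicated) or k+m (erasure coded)."""
    if not osds:
        # Older ceph without CLI OSD query: fixed default
        return 200
    if osds < 5:
        return 128
    elif 5 < osds < 10:
        return 512
    elif 10 < osds < 50:
        return 4096
    else:
        estimate = (osds * 100) / pool_size
        # Round up to the next power of two from the table
        index = bisect.bisect_right(powers_of_two, estimate)
        return powers_of_two[index]

print(estimate_pgs(3, 3))    # small-cluster bucket -> 128
print(estimate_pgs(100, 3))  # 100*100/3 ~ 3333 -> next table entry, 8192
```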
=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2016-01-22 14:18:52 +0000
@@ -76,3 +76,13 @@
     check_call(cmd)

     return create_loopback(path)
+
+
+def is_mapped_loopback_device(device):
+    """
+    Checks if a given device name is an existing/mapped loopback device.
+    :param device: str: Full path to the device (eg, /dev/loop1).
+    :returns: str: Path to the backing file if is a loopback device
+              empty string otherwise
+    """
+    return loopback_devices().get(device, "")
=== modified file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/hookenv.py 2016-01-22 14:18:52 +0000
@@ -492,7 +492,7 @@

 @cached
 def peer_relation_id():
-    '''Get a peer relation id if a peer relation has been joined, else None.'''
+    '''Get the peers relation id if a peers relation has been joined, else None.'''
     md = metadata()
     section = md.get('peers')
     if section:
@@ -517,12 +517,12 @@
 def relation_to_role_and_interface(relation_name):
     """
     Given the name of a relation, return the role and the name of the interface
-    that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
+    that relation uses (where role is one of ``provides``, ``requires``, or ``peers``).

     :returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
     """
     _metadata = metadata()
-    for role in ('provides', 'requires', 'peer'):
+    for role in ('provides', 'requires', 'peers'):
         interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
         if interface:
             return role, interface
@@ -534,7 +534,7 @@
534 """534 """
535 Given a role and interface name, return a list of relation names for the535 Given a role and interface name, return a list of relation names for the
536 current charm that use that interface under that role (where role is one536 current charm that use that interface under that role (where role is one
537 of ``provides``, ``requires``, or ``peer``).537 of ``provides``, ``requires``, or ``peers``).
538538
539 :returns: A list of relation names.539 :returns: A list of relation names.
540 """540 """
@@ -555,7 +555,7 @@
     :returns: A list of relation names.
     """
     results = []
-    for role in ('provides', 'requires', 'peer'):
+    for role in ('provides', 'requires', 'peers'):
         results.extend(role_and_interface_to_relations(role, interface_name))
     return results

@@ -637,7 +637,7 @@


 @cached
-def storage_get(attribute="", storage_id=""):
+def storage_get(attribute=None, storage_id=None):
     """Get storage attributes"""
     _args = ['storage-get', '--format=json']
     if storage_id:
@@ -651,7 +651,7 @@


 @cached
-def storage_list(storage_name=""):
+def storage_list(storage_name=None):
     """List the storage IDs for the unit"""
     _args = ['storage-list', '--format=json']
     if storage_name:
@@ -878,6 +878,40 @@
     subprocess.check_call(cmd)


+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
+def payload_register(ptype, klass, pid):
+    """ is used while a hook is running to let Juju know that a
+    payload has been started."""
+    cmd = ['payload-register']
+    for x in [ptype, klass, pid]:
+        cmd.append(x)
+    subprocess.check_call(cmd)
+
+
+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
+def payload_unregister(klass, pid):
+    """ is used while a hook is running to let Juju know
+    that a payload has been manually stopped. The <class> and <id> provided
+    must match a payload that has been previously registered with juju using
+    payload-register."""
+    cmd = ['payload-unregister']
+    for x in [klass, pid]:
+        cmd.append(x)
+    subprocess.check_call(cmd)
+
+
+@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
+def payload_status_set(klass, pid, status):
+    """is used to update the current status of a registered payload.
+    The <class> and <id> provided must match a payload that has been previously
+    registered with juju using payload-register. The <status> must be one of the
+    follow: starting, started, stopping, stopped"""
+    cmd = ['payload-status-set']
+    for x in [klass, pid, status]:
+        cmd.append(x)
+    subprocess.check_call(cmd)
+
+
 @cached
 def juju_version():
     """Full version string (eg. '1.23.3.1-trusty-amd64')"""

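The three payload helpers added above share one shape: build a Juju hook-tool command from positional arguments, then `check_call` it (the tools only exist inside a hook environment, hence the `OSError` → `NotImplementedError` translation). A standalone sketch of the command construction only, with the subprocess call left out so it runs anywhere; `build_payload_cmd` is an illustrative name, not a charm-helpers function:

```python
def build_payload_cmd(tool, *args):
    # Mirrors payload_register/payload_unregister/payload_status_set:
    # the tool name followed by its positional arguments, in order.
    cmd = [tool]
    for x in args:
        cmd.append(x)
    return cmd

# e.g. what payload_status_set('web', '0', 'started') would invoke
print(build_payload_cmd('payload-status-set', 'web', '0', 'started'))
```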
=== modified file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/host.py 2016-01-22 14:18:52 +0000
@@ -67,10 +67,14 @@
67 """Pause a system service.67 """Pause a system service.
6868
69 Stop it, and prevent it from starting again at boot."""69 Stop it, and prevent it from starting again at boot."""
70 stopped = service_stop(service_name)70 stopped = True
71 if service_running(service_name):
72 stopped = service_stop(service_name)
71 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))73 upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
72 sysv_file = os.path.join(initd_dir, service_name)74 sysv_file = os.path.join(initd_dir, service_name)
73 if os.path.exists(upstart_file):75 if init_is_systemd():
76 service('disable', service_name)
77 elif os.path.exists(upstart_file):
74 override_path = os.path.join(78 override_path = os.path.join(
75 init_dir, '{}.override'.format(service_name))79 init_dir, '{}.override'.format(service_name))
76 with open(override_path, 'w') as fh:80 with open(override_path, 'w') as fh:
@@ -78,9 +82,9 @@
     elif os.path.exists(sysv_file):
         subprocess.check_call(["update-rc.d", service_name, "disable"])
     else:
-        # XXX: Support SystemD too
         raise ValueError(
-            "Unable to detect {0} as either Upstart {1} or SysV {2}".format(
+            "Unable to detect {0} as SystemD, Upstart {1} or"
+            " SysV {2}".format(
                 service_name, upstart_file, sysv_file))
     return stopped

@@ -92,7 +96,9 @@
     Reenable starting again at boot. Start the service"""
     upstart_file = os.path.join(init_dir, "{}.conf".format(service_name))
     sysv_file = os.path.join(initd_dir, service_name)
-    if os.path.exists(upstart_file):
+    if init_is_systemd():
+        service('enable', service_name)
+    elif os.path.exists(upstart_file):
         override_path = os.path.join(
             init_dir, '{}.override'.format(service_name))
         if os.path.exists(override_path):
@@ -100,34 +106,43 @@
100 elif os.path.exists(sysv_file):106 elif os.path.exists(sysv_file):
101 subprocess.check_call(["update-rc.d", service_name, "enable"])107 subprocess.check_call(["update-rc.d", service_name, "enable"])
102 else:108 else:
103 # XXX: Support SystemD too
104 raise ValueError(109 raise ValueError(
105 "Unable to detect {0} as either Upstart {1} or SysV {2}".format(110 "Unable to detect {0} as SystemD, Upstart {1} or"
111 " SysV {2}".format(
106 service_name, upstart_file, sysv_file))112 service_name, upstart_file, sysv_file))
107113
108 started = service_start(service_name)114 started = service_running(service_name)
115 if not started:
116 started = service_start(service_name)
109 return started117 return started
110118
111119
112def service(action, service_name):120def service(action, service_name):
113 """Control a system service"""121 """Control a system service"""
114 cmd = ['service', service_name, action]122 if init_is_systemd():
123 cmd = ['systemctl', action, service_name]
124 else:
125 cmd = ['service', service_name, action]
115 return subprocess.call(cmd) == 0126 return subprocess.call(cmd) == 0
116127
117128
118def service_running(service):129def service_running(service_name):
119 """Determine whether a system service is running"""130 """Determine whether a system service is running"""
120 try:131 if init_is_systemd():
121 output = subprocess.check_output(132 return service('is-active', service_name)
122 ['service', service, 'status'],
123 stderr=subprocess.STDOUT).decode('UTF-8')
124 except subprocess.CalledProcessError:
125 return False
126 else:133 else:
127 if ("start/running" in output or "is running" in output):134 try:
128 return True135 output = subprocess.check_output(
129 else:136 ['service', service_name, 'status'],
137 stderr=subprocess.STDOUT).decode('UTF-8')
138 except subprocess.CalledProcessError:
130 return False139 return False
140 else:
141 if ("start/running" in output or "is running" in output or
142 "up and running" in output):
143 return True
144 else:
145 return False
131146
132147
133def service_available(service_name):148def service_available(service_name):
@@ -142,8 +157,29 @@
142 return True157 return True
143158
144159
145def adduser(username, password=None, shell='/bin/bash', system_user=False):160SYSTEMD_SYSTEM = '/run/systemd/system'
146 """Add a user to the system"""161
162
163def init_is_systemd():
164 """Return True if the host system uses systemd, False otherwise."""
165 return os.path.isdir(SYSTEMD_SYSTEM)
166
167
168def adduser(username, password=None, shell='/bin/bash', system_user=False,
169 primary_group=None, secondary_groups=None):
170 """Add a user to the system.
171
172 Will log but otherwise succeed if the user already exists.
173
174 :param str username: Username to create
175 :param str password: Password for user; if ``None``, create a system user
176 :param str shell: The default shell for the user
177 :param bool system_user: Whether to create a login or system user
178 :param str primary_group: Primary group for user; defaults to username
179 :param list secondary_groups: Optional list of additional groups
180
181 :returns: The password database entry struct, as returned by `pwd.getpwnam`
182 """
147 try:183 try:
148 user_info = pwd.getpwnam(username)184 user_info = pwd.getpwnam(username)
149 log('user {0} already exists!'.format(username))185 log('user {0} already exists!'.format(username))
@@ -158,6 +194,16 @@
158 '--shell', shell,194 '--shell', shell,
159 '--password', password,195 '--password', password,
160 ])196 ])
197 if not primary_group:
198 try:
199 grp.getgrnam(username)
200 primary_group = username # avoid "group exists" error
201 except KeyError:
202 pass
203 if primary_group:
204 cmd.extend(['-g', primary_group])
205 if secondary_groups:
206 cmd.extend(['-G', ','.join(secondary_groups)])
161 cmd.append(username)207 cmd.append(username)
162 subprocess.check_call(cmd)208 subprocess.check_call(cmd)
163 user_info = pwd.getpwnam(username)209 user_info = pwd.getpwnam(username)
@@ -255,14 +301,12 @@
255301
256302
257def fstab_remove(mp):303def fstab_remove(mp):
258 """Remove the given mountpoint entry from /etc/fstab304 """Remove the given mountpoint entry from /etc/fstab"""
259 """
260 return Fstab.remove_by_mountpoint(mp)305 return Fstab.remove_by_mountpoint(mp)
261306
262307
263def fstab_add(dev, mp, fs, options=None):308def fstab_add(dev, mp, fs, options=None):
264 """Adds the given device entry to the /etc/fstab file309 """Adds the given device entry to the /etc/fstab file"""
265 """
266 return Fstab.add(dev, mp, fs, options=options)310 return Fstab.add(dev, mp, fs, options=options)
267311
268312
@@ -318,8 +362,7 @@
318362
319363
320def file_hash(path, hash_type='md5'):364def file_hash(path, hash_type='md5'):
321 """365 """Generate a hash checksum of the contents of 'path' or None if not found.
322 Generate a hash checksum of the contents of 'path' or None if not found.
323366
324 :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,367 :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
325 such as md5, sha1, sha256, sha512, etc.368 such as md5, sha1, sha256, sha512, etc.
@@ -334,10 +377,9 @@
334377
335378
336def path_hash(path):379def path_hash(path):
337 """380 """Generate a hash checksum of all files matching 'path'. Standard
338 Generate a hash checksum of all files matching 'path'. Standard wildcards381 wildcards like '*' and '?' are supported, see documentation for the 'glob'
339 like '*' and '?' are supported, see documentation for the 'glob' module for382 module for more information.
340 more information.
341383
342 :return: dict: A { filename: hash } dictionary for all matched files.384 :return: dict: A { filename: hash } dictionary for all matched files.
343 Empty if none found.385 Empty if none found.
@@ -349,8 +391,7 @@
349391
350392
351def check_hash(path, checksum, hash_type='md5'):393def check_hash(path, checksum, hash_type='md5'):
352 """394 """Validate a file using a cryptographic checksum.
353 Validate a file using a cryptographic checksum.
354395
355 :param str checksum: Value of the checksum used to validate the file.396 :param str checksum: Value of the checksum used to validate the file.
356 :param str hash_type: Hash algorithm used to generate `checksum`.397 :param str hash_type: Hash algorithm used to generate `checksum`.
@@ -365,6 +406,7 @@
365406
366407
367class ChecksumError(ValueError):408class ChecksumError(ValueError):
409 """A class derived from Value error to indicate the checksum failed."""
368 pass410 pass
369411
370412
@@ -470,7 +512,7 @@
470512
471513
472def list_nics(nic_type=None):514def list_nics(nic_type=None):
473 '''Return a list of nics of given type(s)'''515 """Return a list of nics of given type(s)"""
474 if isinstance(nic_type, six.string_types):516 if isinstance(nic_type, six.string_types):
475 int_types = [nic_type]517 int_types = [nic_type]
476 else:518 else:
@@ -512,12 +554,13 @@
512554
513555
514def set_nic_mtu(nic, mtu):556def set_nic_mtu(nic, mtu):
515 '''Set MTU on a network interface'''557 """Set the Maximum Transmission Unit (MTU) on a network interface."""
516 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]558 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
517 subprocess.check_call(cmd)559 subprocess.check_call(cmd)
518560
519561
520def get_nic_mtu(nic):562def get_nic_mtu(nic):
563 """Return the Maximum Transmission Unit (MTU) for a network interface."""
521 cmd = ['ip', 'addr', 'show', nic]564 cmd = ['ip', 'addr', 'show', nic]
522 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')565 ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
523 mtu = ""566 mtu = ""
@@ -529,6 +572,7 @@
529572
530573
531def get_nic_hwaddr(nic):574def get_nic_hwaddr(nic):
575 """Return the Media Access Control (MAC) for a network interface."""
532 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]576 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
533 ip_output = subprocess.check_output(cmd).decode('UTF-8')577 ip_output = subprocess.check_output(cmd).decode('UTF-8')
534 hwaddr = ""578 hwaddr = ""
@@ -539,7 +583,7 @@
539583
540584
541def cmp_pkgrevno(package, revno, pkgcache=None):585def cmp_pkgrevno(package, revno, pkgcache=None):
542 '''Compare supplied revno with the revno of the installed package586 """Compare supplied revno with the revno of the installed package
543587
544 * 1 => Installed revno is greater than supplied arg588 * 1 => Installed revno is greater than supplied arg
545 * 0 => Installed revno is the same as supplied arg589 * 0 => Installed revno is the same as supplied arg
@@ -548,7 +592,7 @@
548 This function imports apt_cache function from charmhelpers.fetch if592 This function imports apt_cache function from charmhelpers.fetch if
549 the pkgcache argument is None. Be sure to add charmhelpers.fetch if593 the pkgcache argument is None. Be sure to add charmhelpers.fetch if
550 you call this function, or pass an apt_pkg.Cache() instance.594 you call this function, or pass an apt_pkg.Cache() instance.
551 '''595 """
552 import apt_pkg596 import apt_pkg
553 if not pkgcache:597 if not pkgcache:
554 from charmhelpers.fetch import apt_cache598 from charmhelpers.fetch import apt_cache
@@ -558,19 +602,27 @@
558602
559603
560@contextmanager604@contextmanager
561def chdir(d):605def chdir(directory):
606 """Change the current working directory to a different directory for a code
607 block and return the previous directory after the block exits. Useful to
608 run commands from a specificed directory.
609
610 :param str directory: The directory path to change to for this context.
611 """
562 cur = os.getcwd()612 cur = os.getcwd()
563 try:613 try:
564 yield os.chdir(d)614 yield os.chdir(directory)
565 finally:615 finally:
566 os.chdir(cur)616 os.chdir(cur)
567617
568618
569def chownr(path, owner, group, follow_links=True, chowntopdir=False):619def chownr(path, owner, group, follow_links=True, chowntopdir=False):
570 """620 """Recursively change user and group ownership of files and directories
571 Recursively change user and group ownership of files and directories
572 in given path. Doesn't chown path itself by default, only its children.621 in given path. Doesn't chown path itself by default, only its children.
573622
623 :param str path: The string path to start changing ownership.
624 :param str owner: The owner string to use when looking up the uid.
625 :param str group: The group string to use when looking up the gid.
574 :param bool follow_links: Also Chown links if True626 :param bool follow_links: Also Chown links if True
575 :param bool chowntopdir: Also chown path itself if True627 :param bool chowntopdir: Also chown path itself if True
576 """628 """
@@ -594,15 +646,23 @@
594646
595647
596def lchownr(path, owner, group):648def lchownr(path, owner, group):
649 """Recursively change user and group ownership of files and directories
650 in a given path, not following symbolic links. See the documentation for
651 'os.lchown' for more information.
652
653 :param str path: The string path to start changing ownership.
654 :param str owner: The owner string to use when looking up the uid.
655 :param str group: The group string to use when looking up the gid.
656 """
597 chownr(path, owner, group, follow_links=False)657 chownr(path, owner, group, follow_links=False)
598658
599659
600def get_total_ram():660def get_total_ram():
601 '''The total amount of system RAM in bytes.661 """The total amount of system RAM in bytes.
602662
603 This is what is reported by the OS, and may be overcommitted when663 This is what is reported by the OS, and may be overcommitted when
604 there are multiple containers hosted on the same machine.664 there are multiple containers hosted on the same machine.
605 '''665 """
606 with open('/proc/meminfo', 'r') as f:666 with open('/proc/meminfo', 'r') as f:
607 for line in f.readlines():667 for line in f.readlines():
608 if line:668 if line:
609669
=== modified file 'hooks/charmhelpers/core/services/helpers.py'
--- hooks/charmhelpers/core/services/helpers.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/services/helpers.py 2016-01-22 14:18:52 +0000
@@ -243,13 +243,15 @@
     :param str source: The template source file, relative to
         `$CHARM_DIR/templates`

-    :param str target: The target to write the rendered template to
+    :param str target: The target to write the rendered template to (or None)
     :param str owner: The owner of the rendered file
     :param str group: The group of the rendered file
     :param int perms: The permissions of the rendered file
     :param partial on_change_action: functools partial to be executed when
                                     rendered file changes
     :param jinja2 loader template_loader: A jinja2 template loader
+
+    :return str: The rendered template
     """
     def __init__(self, source, target,
                  owner='root', group='root', perms=0o444,
@@ -267,12 +269,14 @@
         if self.on_change_action and os.path.isfile(self.target):
             pre_checksum = host.file_hash(self.target)
         service = manager.get_service(service_name)
-        context = {}
+        context = {'ctx': {}}
         for ctx in service.get('required_data', []):
             context.update(ctx)
-        templating.render(self.source, self.target, context,
-                          self.owner, self.group, self.perms,
-                          template_loader=self.template_loader)
+            context['ctx'].update(ctx)
+
+        result = templating.render(self.source, self.target, context,
+                                   self.owner, self.group, self.perms,
+                                   template_loader=self.template_loader)
         if self.on_change_action:
             if pre_checksum == host.file_hash(self.target):
                 hookenv.log(
@@ -281,6 +285,8 @@
             else:
                 self.on_change_action()

+        return result
+

 # Convenience aliases for templates
 render_template = template = TemplateCallback

=== modified file 'hooks/charmhelpers/core/templating.py'
--- hooks/charmhelpers/core/templating.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/core/templating.py 2016-01-22 14:18:52 +0000
@@ -27,7 +27,8 @@

     The `source` path, if not absolute, is relative to the `templates_dir`.

-    The `target` path should be absolute.
+    The `target` path should be absolute. It can also be `None`, in which
+    case no file will be written.

     The context should be a dict containing the values to be replaced in the
     template.
@@ -36,6 +37,9 @@

     If omitted, `templates_dir` defaults to the `templates` folder in the charm.

+    The rendered template will be written to the file as well as being returned
+    as a string.
+
     Note: Using this requires python-jinja2; if it is not installed, calling
     this will attempt to use charmhelpers.fetch.apt_install to install it.
     """
@@ -67,9 +71,11 @@
                     level=hookenv.ERROR)
         raise e
     content = template.render(context)
-    target_dir = os.path.dirname(target)
-    if not os.path.exists(target_dir):
-        # This is a terrible default directory permission, as the file
-        # or its siblings will often contain secrets.
-        host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
-    host.write_file(target, content.encode(encoding), owner, group, perms)
+    if target is not None:
+        target_dir = os.path.dirname(target)
+        if not os.path.exists(target_dir):
+            # This is a terrible default directory permission, as the file
+            # or its siblings will often contain secrets.
+            host.mkdir(os.path.dirname(target), owner, group, perms=0o755)
+        host.write_file(target, content.encode(encoding), owner, group, perms)
+    return content

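The templating.py change makes `render()` usable without touching the filesystem: content is always returned, and the write happens only when `target` is given. A stdlib stand-in illustrating the same contract (`render_to` is a hypothetical name and `string.Template` a stand-in; the real helper renders with Jinja2 and charm-helpers' `host` functions):

```python
import os
from string import Template


def render_to(source_text, context, target=None):
    """Render source_text with context; write to target only if given.

    Mirrors the patched render() contract: the rendered content is
    always returned, and target=None skips the filesystem entirely.
    """
    content = Template(source_text).substitute(context)
    if target is not None:
        target_dir = os.path.dirname(target)
        if target_dir and not os.path.exists(target_dir):
            os.makedirs(target_dir)
        with open(target, 'w') as fh:
            fh.write(content)
    return content
```

This is what lets the `TemplateCallback` change above pass the rendered string back to callers while still supporting the old write-to-file behaviour.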
=== modified file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2015-11-11 14:57:38 +0000
+++ hooks/charmhelpers/fetch/__init__.py 2016-01-22 14:18:52 +0000
@@ -98,6 +98,14 @@
     'liberty/proposed': 'trusty-proposed/liberty',
     'trusty-liberty/proposed': 'trusty-proposed/liberty',
     'trusty-proposed/liberty': 'trusty-proposed/liberty',
+    # Mitaka
+    'mitaka': 'trusty-updates/mitaka',
+    'trusty-mitaka': 'trusty-updates/mitaka',
+    'trusty-mitaka/updates': 'trusty-updates/mitaka',
+    'trusty-updates/mitaka': 'trusty-updates/mitaka',
+    'mitaka/proposed': 'trusty-proposed/mitaka',
+    'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
+    'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
 }

 # The order of this list is very important. Handlers should be listed in from
@@ -411,7 +419,7 @@
                 importlib.import_module(package),
                 classname)
             plugin_list.append(handler_class())
-        except (ImportError, AttributeError):
+        except NotImplementedError:
             # Skip missing plugins so that they can be ommitted from
             # installation if desired
             log("FetchHandler {} not found, skipping plugin".format(

=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 2016-01-22 14:18:52 +0000
@@ -108,7 +108,7 @@
         install_opener(opener)
         response = urlopen(source)
         try:
-            with open(dest, 'w') as dest_file:
+            with open(dest, 'wb') as dest_file:
                 dest_file.write(response.read())
         except Exception as e:
             if os.path.isfile(dest):

=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2015-02-19 22:08:13 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 2016-01-22 14:18:52 +0000
@@ -15,60 +15,50 @@
 # along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.

 import os
+from subprocess import check_call
 from charmhelpers.fetch import (
     BaseFetchHandler,
-    UnhandledSource
+    UnhandledSource,
+    filter_installed_packages,
+    apt_install,
 )
 from charmhelpers.core.host import mkdir

-import six
-if six.PY3:
-    raise ImportError('bzrlib does not support Python3')

-try:
-    from bzrlib.branch import Branch
-    from bzrlib import bzrdir, workingtree, errors
-except ImportError:
-    from charmhelpers.fetch import apt_install
-    apt_install("python-bzrlib")
-    from bzrlib.branch import Branch
-    from bzrlib import bzrdir, workingtree, errors
+if filter_installed_packages(['bzr']) != []:
+    apt_install(['bzr'])
+    if filter_installed_packages(['bzr']) != []:
+        raise NotImplementedError('Unable to install bzr')


 class BzrUrlFetchHandler(BaseFetchHandler):
     """Handler for bazaar branches via generic and lp URLs"""
     def can_handle(self, source):
         url_parts = self.parse_url(source)
-        if url_parts.scheme not in ('bzr+ssh', 'lp'):
+        if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
             return False
+        elif not url_parts.scheme:
+            return os.path.exists(os.path.join(source, '.bzr'))
         else:
             return True

     def branch(self, source, dest):
-        url_parts = self.parse_url(source)
-        # If we use lp:branchname scheme we need to load plugins
         if not self.can_handle(source):
             raise UnhandledSource("Cannot handle {}".format(source))
-        if url_parts.scheme == "lp":
-            from bzrlib.plugin import load_plugins
-            load_plugins()
-        try:
-            local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
-        except errors.AlreadyControlDirError:
-            local_branch = Branch.open(dest)
-        try:
-            remote_branch = Branch.open(source)
-            remote_branch.push(local_branch)
-            tree = workingtree.WorkingTree.open(dest)
-            tree.update()
-        except Exception as e:
-            raise e
+        if os.path.exists(dest):
+            check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
+        else:
+            check_call(['bzr', 'branch', source, dest])

-    def install(self, source):
+    def install(self, source, dest=None):
         url_parts = self.parse_url(source)
         branch_name = url_parts.path.strip("/").split("/")[-1]
-        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
-                                branch_name)
+        if dest:
+            dest_dir = os.path.join(dest, branch_name)
+        else:
+            dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
+                                    branch_name)
+
         if not os.path.exists(dest_dir):
             mkdir(dest_dir, perms=0o755)
         try:

=== modified file 'hooks/charmhelpers/fetch/giturl.py'
--- hooks/charmhelpers/fetch/giturl.py 2015-07-22 12:10:31 +0000
+++ hooks/charmhelpers/fetch/giturl.py 2016-01-22 14:18:52 +0000
@@ -15,24 +15,18 @@
 # along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.

 import os
+from subprocess import check_call, CalledProcessError
 from charmhelpers.fetch import (
     BaseFetchHandler,
-    UnhandledSource
+    UnhandledSource,
+    filter_installed_packages,
+    apt_install,
 )
-from charmhelpers.core.host import mkdir
-
-import six
-if six.PY3:
-    raise ImportError('GitPython does not support Python 3')
-
-try:
-    from git import Repo
-except ImportError:
-    from charmhelpers.fetch import apt_install
-    apt_install("python-git")
-    from git import Repo
-
-from git.exc import GitCommandError  # noqa E402
+
+if filter_installed_packages(['git']) != []:
+    apt_install(['git'])
+    if filter_installed_packages(['git']) != []:
+        raise NotImplementedError('Unable to install git')


 class GitUrlFetchHandler(BaseFetchHandler):
@@ -40,19 +34,24 @@
     def can_handle(self, source):
         url_parts = self.parse_url(source)
         # TODO (mattyw) no support for ssh git@ yet
-        if url_parts.scheme not in ('http', 'https', 'git'):
+        if url_parts.scheme not in ('http', 'https', 'git', ''):
             return False
+        elif not url_parts.scheme:
+            return os.path.exists(os.path.join(source, '.git'))
         else:
             return True

-    def clone(self, source, dest, branch, depth=None):
+    def clone(self, source, dest, branch="master", depth=None):
         if not self.can_handle(source):
             raise UnhandledSource("Cannot handle {}".format(source))

-        if depth:
-            Repo.clone_from(source, dest, branch=branch, depth=depth)
+        if os.path.exists(dest):
+            cmd = ['git', '-C', dest, 'pull', source, branch]
         else:
-            Repo.clone_from(source, dest, branch=branch)
+            cmd = ['git', 'clone', source, dest, '--branch', branch]
+        if depth:
+            cmd.extend(['--depth', depth])
+        check_call(cmd)

     def install(self, source, branch="master", dest=None, depth=None):
         url_parts = self.parse_url(source)
@@ -62,11 +61,9 @@
         else:
             dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
                                     branch_name)
-        if not os.path.exists(dest_dir):
-            mkdir(dest_dir, perms=0o755)
         try:
             self.clone(source, dest_dir, branch, depth)
-        except GitCommandError as e:
+        except CalledProcessError as e:
             raise UnhandledSource(e)
         except OSError as e:
             raise UnhandledSource(e.strerror)

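The reworked giturl handler drops GitPython in favour of shelling out to the git CLI, and `can_handle()` now also accepts a scheme-less source when it points at a local checkout. A sketch of that check in isolation (`can_handle_git` is an illustrative name, not the charm-helpers API; charm-helpers itself parses URLs via six's urllib shim rather than `urllib.parse` directly):

```python
import os
from urllib.parse import urlparse


def can_handle_git(source):
    # http(s)/git URLs are always handled; a bare path is handled only
    # when it already contains a .git directory, mirroring the patched
    # GitUrlFetchHandler.can_handle().
    scheme = urlparse(source).scheme
    if scheme not in ('http', 'https', 'git', ''):
        return False
    if not scheme:
        return os.path.exists(os.path.join(source, '.git'))
    return True
```

The same shape applies to the bzrurl handler above, with `.bzr` as the marker directory and `bzr+ssh`/`lp` as the accepted schemes.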
=== modified file 'tests/019-basic-trusty-mitaka' (properties changed: -x to +x)
=== modified file 'tests/020-basic-wily-liberty' (properties changed: -x to +x)
=== modified file 'tests/basic_deployment.py'
--- tests/basic_deployment.py 2015-11-11 14:57:38 +0000
+++ tests/basic_deployment.py 2016-01-22 14:18:52 +0000
@@ -267,7 +267,7 @@
              'tenantId': u.not_null,
              'id': u.not_null,
              'email': 'juju@localhost'},
-            {'name': 'quantum',
+            {'name': 'neutron',
              'enabled': True,
              'tenantId': u.not_null,
              'id': u.not_null,

=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2015-11-11 19:54:58 +0000
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-22 14:18:52 +0000
@@ -125,7 +125,8 @@

         # Charms which can not use openstack-origin, ie. many subordinates
         no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
-                     'openvswitch-odl', 'neutron-api-odl', 'odl-controller']
+                     'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
+                     'cinder-backup']

         if self.openstack:
             for svc in services:
@@ -225,7 +226,8 @@
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
          self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
-         self.wily_liberty) = range(12)
+         self.wily_liberty, self.trusty_mitaka,
+         self.xenial_mitaka) = range(14)

         releases = {
             ('precise', None): self.precise_essex,
@@ -237,9 +239,11 @@
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
             ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
             ('utopic', None): self.utopic_juno,
             ('vivid', None): self.vivid_kilo,
-            ('wily', None): self.wily_liberty}
+            ('wily', None): self.wily_liberty,
+            ('xenial', None): self.xenial_mitaka}
         return releases[(self.series, self.openstack)]

     def _get_openstack_release_string(self):
@@ -256,6 +260,7 @@
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
             ('wily', 'liberty'),
+            ('xenial', 'mitaka'),
             ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
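The amulet-helper additions above just extend a dict keyed on (series, openstack-origin) tuples alongside an enum-style tuple unpacked from `range()`. A trimmed sketch of that lookup pattern (the three releases shown are a subset chosen for illustration, and `get_openstack_release` is a simplified stand-in for the helper's method):

```python
# Enum-style release indices, as in the amulet deployment helper.
(trusty_liberty, trusty_mitaka, xenial_mitaka) = range(3)

RELEASES = {
    ('trusty', 'cloud:trusty-liberty'): trusty_liberty,
    ('trusty', 'cloud:trusty-mitaka'): trusty_mitaka,
    ('xenial', None): xenial_mitaka,  # xenial's native release is mitaka
}


def get_openstack_release(series, openstack_origin=None):
    # A KeyError here means the (series, origin) combination is not a
    # supported deployment target.
    return RELEASES[(series, openstack_origin)]
```

This is why both the `range(12)` → `range(14)` bump and the new dict entries must land together: the tuple defines the indices the dict values refer to.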
