Merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646 into lp:~openstack-charmers/charms/trusty/quantum-gateway/next
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Merge into: lp:~openstack-charmers/charms/trusty/quantum-gateway/next
Diff against target: 3408 lines (+992/-561), 29 files modified
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+83/-51)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+319/-226)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+73/-20)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/quantum_contexts.py (+3/-1)
hooks/quantum_hooks.py (+5/-0)
hooks/quantum_utils.py (+1/-1)
templates/icehouse/neutron.conf (+1/-0)
unit_tests/test_quantum_contexts.py (+40/-39)
unit_tests/test_quantum_hooks.py (+5/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Edward Hope-Morley | | | Needs Fixing
Xiang Hui | | | Pending
Review via email: mp+242612@code.launchpad.net
This proposal has been superseded by a proposal from 2014-12-08.
Commit message
Description of the change
This story (SF#74646) adds support for setting a VM MTU <= 1500 by configuring the MTU of the physical NICs and the network_device_mtu option.
1, set the MTU of the physical NICs in both the nova-compute charm and the neutron-gateway charm:
juju set nova-compute phy-nic-mtu=1546
juju set neutron-gateway phy-nic-mtu=1546
2, set the MTU of the peer devices between the OVS bridge br-phy and the OVS bridge br-int by adding the 'network-device-mtu' option to the neutron-api charm:
juju set neutron-api network-device-mtu=<mtu>
Limitations:
a, Linux bridge is not supported, because we do not add those three parameters (ovs_use_veth, use_veth_…)
b, for GRE and VXLAN, this step is optional.
c, after setting network-device-mtu, …
3, at this point, the MTU inside the VM can still be configured via DHCP by setting the instance-mtu configuration option:
juju set neutron-gateway instance-mtu=1500
Limitations:
a, only VM MTU <= 1500 is supported; to set a VM MTU > 1500 you also need to set the MTU of the tap devices associated with that VM, as described at this link (http://…)
b, MTU per network is not supported.
NOTE: maybe we can't test this feature in bastion.
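The relationship between these MTU values can be sketched in a few lines. This is an illustration only: the per-tunnel overhead figures below are typical approximations assumed for this example, not values taken from the charm or this proposal.

```python
# Minimal sketch of the MTU headroom check implied by the steps above.
# Overhead values are approximate/typical and are an assumption of this
# example (outer Ethernet + IP + tunnel headers), not charm constants.
ENCAP_OVERHEAD = {
    'gre': 42,    # approx: outer Ethernet (14) + IP (20) + GRE w/ key (8)
    'vxlan': 50,  # outer Ethernet (14) + IP (20) + UDP (8) + VXLAN (8)
}


def required_phy_nic_mtu(instance_mtu, tunnel_type):
    """Return the minimum MTU the physical NIC needs so that a packet
    from a VM using instance_mtu is not fragmented by encapsulation."""
    return instance_mtu + ENCAP_OVERHEAD[tunnel_type]


# With instance-mtu=1500 and GRE, roughly 1542 is needed on the wire,
# so the proposal's example value of phy-nic-mtu=1546 leaves headroom.
assert required_phy_nic_mtu(1500, 'gre') <= 1546
```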
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_lint_check #1214 quantum-
LINT OK: passed
LINT Results (max last 5 lines):
I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
I: config.yaml: option ext-port has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option instance-mtu has no default value
I: config.yaml: option external-network-id has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_amulet_test #518 quantum-
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
juju-test INFO : Results: 2 passed, 1 failed, 0 errored
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
Edward Hope-Morley (hopem) wrote:
So from what I understand, when deploying on metal and/or lxc containers, we need to set the MTU in two places. First, we need to set the MTU on the physical interface and the MAAS bridge br0 (if MAAS deployed) used by OVS. Second, we need to set the MTU for each veth attached to br0 used by the lxc containers into which units are deployed. Also, the MTU needs to be set persistently so that when a node is rebooted it does not revert to the default 1500 value.
I don't think the charm is the place to do this since it may not be aware of all these interfaces, and setting an lxc default for all containers seems a bit too intrusive. A MAAS preseed that performs these actions seems a better fit. Thoughts?
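For context on the persistence point raised above: on Debian/Ubuntu the usual way to make an interface MTU survive a reboot is to record it in /etc/network/interfaces. A rough sketch of that idea follows; this is a hypothetical helper for illustration, not the charm's actual set_nic_mtu implementation, and the file path is parameterised so the sketch can be exercised against any file.

```python
# Rough sketch: persist an interface MTU by rewriting its stanza in an
# interfaces(5)-style file. Illustrative only; not charm code.
def persist_mtu(path, iface, mtu):
    with open(path) as f:
        lines = f.read().splitlines()

    out = []
    in_stanza = False
    for line in lines:
        stripped = line.strip()
        if stripped.startswith('iface %s ' % iface):
            # entering the stanza for the target interface
            in_stanza = True
            out.append(line)
            continue
        if in_stanza:
            if stripped.startswith('mtu '):
                # drop any existing mtu setting; we re-add it below
                continue
            if stripped.startswith(('iface ', 'auto ', 'mapping ', 'source ')):
                # stanza ends at the next top-level keyword
                out.append('    mtu %d' % mtu)
                in_stanza = False
        out.append(line)
    if in_stanza:
        # stanza ran to end of file
        out.append('    mtu %d' % mtu)

    with open(path, 'w') as f:
        f.write('\n'.join(out) + '\n')
```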
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_lint_check #1311 quantum-
LINT OK: passed
LINT Results (max last 5 lines):
I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
I: config.yaml: option ext-port has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option instance-mtu has no default value
I: config.yaml: option external-network-id has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_unit_test #1145 quantum-
UNIT OK: passed
UNIT Results (max last 5 lines):
hooks/
hooks/
TOTAL 440 8 98%
Ran 83 tests in 3.396s
OK
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_amulet_test #577 quantum-
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test INFO : Results: 2 passed, 1 failed, 0 errored
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_lint_check #1313 quantum-
LINT OK: passed
LINT Results (max last 5 lines):
I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
I: config.yaml: option ext-port has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option instance-mtu has no default value
I: config.yaml: option external-network-id has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_unit_test #1147 quantum-
UNIT OK: passed
UNIT Results (max last 5 lines):
hooks/
hooks/
TOTAL 440 8 98%
Ran 83 tests in 3.532s
OK
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote:
UOSCI bot says:
charm_amulet_test #579 quantum-
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
juju-test INFO : Results: 2 passed, 1 failed, 0 errored
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
82. By Hua Zhang: enable network-device-mtu
83. By Hua Zhang: fix unit test
84. By Hua Zhang: enable persistence
85. By Hua Zhang: sync charm-helpers
86. By Hua Zhang: sync charm-helpers
87. By Hua Zhang: sync charm-helpers to include contrib.python to fix unit test error
88. By Hua Zhang: fix KeyError: network-device-mtu
89. By Hua Zhang: fix hanging indent
Unmerged revisions
Preview Diff
=== modified file 'config.yaml'
--- config.yaml	2014-11-24 09:34:05 +0000
+++ config.yaml	2014-12-05 07:18:23 +0000
@@ -115,3 +115,9 @@
       .
       This network will be used for tenant network traffic in overlay
       networks.
+  phy-nic-mtu:
+    type: int
+    default: 1500
+    description: |
+      To improve network performance of VM, sometimes we should keep VM MTU as 1500
+      and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE)
 
=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
--- hooks/charmhelpers/contrib/hahelpers/cluster.py	2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py	2014-12-05 07:18:23 +0000
@@ -13,9 +13,10 @@
 
 import subprocess
 import os
-
 from socket import gethostname as get_unit_hostname
 
+import six
+
 from charmhelpers.core.hookenv import (
     log,
     relation_ids,
@@ -77,7 +78,7 @@
         "show", resource
     ]
     try:
-        status = subprocess.check_output(cmd)
+        status = subprocess.check_output(cmd).decode('UTF-8')
     except subprocess.CalledProcessError:
         return False
     else:
@@ -150,34 +151,42 @@
     return False
 
 
-def determine_api_port(public_port):
+def determine_api_port(public_port, singlenode_mode=False):
     '''
     Determine correct API server listening port based on
     existence of HTTPS reverse proxy and/or haproxy.
 
     public_port: int: standard public port for given service
 
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
     returns: int: the correct listening port for the API service
     '''
     i = 0
-    if len(peer_units()) > 0 or is_clustered():
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
         i += 1
     if https():
         i += 1
     return public_port - (i * 10)
 
 
-def determine_apache_port(public_port):
+def determine_apache_port(public_port, singlenode_mode=False):
     '''
     Description: Determine correct apache listening port based on public IP +
     state of the cluster.
 
     public_port: int: standard public port for given service
 
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
     returns: int: the correct listening port for the HAProxy service
     '''
     i = 0
-    if len(peer_units()) > 0 or is_clustered():
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
         i += 1
     return public_port - (i * 10)
 
@@ -197,7 +206,7 @@
     for setting in settings:
         conf[setting] = config_get(setting)
     missing = []
-    [missing.append(s) for s, v in conf.iteritems() if v is None]
+    [missing.append(s) for s, v in six.iteritems(conf) if v is None]
     if missing:
         log('Insufficient config data to configure hacluster.', level=ERROR)
         raise HAIncompleteConfig
 
=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
--- hooks/charmhelpers/contrib/network/ip.py	2014-10-16 17:42:14 +0000
+++ hooks/charmhelpers/contrib/network/ip.py	2014-12-05 07:18:23 +0000
@@ -1,16 +1,20 @@
 import glob
 import re
 import subprocess
-import sys
 
 from functools import partial
 
 from charmhelpers.core.hookenv import unit_get
 from charmhelpers.fetch import apt_install
 from charmhelpers.core.hookenv import (
-    WARNING,
-    ERROR,
-    log
+    config,
+    log,
+    INFO
+)
+from charmhelpers.core.host import (
+    list_nics,
+    get_nic_mtu,
+    set_nic_mtu
 )
 
 try:
@@ -34,31 +38,28 @@
                          network)
 
 
+def no_ip_found_error_out(network):
+    errmsg = ("No IP address found in network: %s" % network)
+    raise ValueError(errmsg)
+
+
 def get_address_in_network(network, fallback=None, fatal=False):
-    """
-    Get an IPv4 or IPv6 address within the network from the host.
+    """Get an IPv4 or IPv6 address within the network from the host.
 
     :param network (str): CIDR presentation format. For example,
         '192.168.1.0/24'.
     :param fallback (str): If no address is found, return fallback.
     :param fatal (boolean): If no address is found, fallback is not
         set and fatal is True then exit(1).
-
     """
-
-    def not_found_error_out():
-        log("No IP address found in network: %s" % network,
-            level=ERROR)
-        sys.exit(1)
-
     if network is None:
         if fallback is not None:
             return fallback
+
+        if fatal:
+            no_ip_found_error_out(network)
         else:
-            if fatal:
-                not_found_error_out()
-            else:
-                return None
+            return None
 
     _validate_cidr(network)
     network = netaddr.IPNetwork(network)
@@ -70,6 +71,7 @@
             cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
             if cidr in network:
                 return str(cidr.ip)
+
     if network.version == 6 and netifaces.AF_INET6 in addresses:
         for addr in addresses[netifaces.AF_INET6]:
             if not addr['addr'].startswith('fe80'):
@@ -82,20 +84,20 @@
         return fallback
 
     if fatal:
-        not_found_error_out()
+        no_ip_found_error_out(network)
 
     return None
 
 
 def is_ipv6(address):
-    '''Determine whether provided address is IPv6 or not'''
+    """Determine whether provided address is IPv6 or not."""
     try:
         address = netaddr.IPAddress(address)
     except netaddr.AddrFormatError:
         # probably a hostname - so not an address at all!
         return False
-    else:
-        return address.version == 6
+
+    return address.version == 6
 
 
 def is_address_in_network(network, address):
@@ -113,11 +115,13 @@
     except (netaddr.core.AddrFormatError, ValueError):
         raise ValueError("Network (%s) is not in CIDR presentation format" %
                          network)
+
     try:
         address = netaddr.IPAddress(address)
     except (netaddr.core.AddrFormatError, ValueError):
         raise ValueError("Address (%s) is not in correct presentation format" %
                          address)
+
     if address in network:
         return True
     else:
@@ -147,6 +151,7 @@
                     return iface
                 else:
                     return addresses[netifaces.AF_INET][0][key]
+
         if address.version == 6 and netifaces.AF_INET6 in addresses:
             for addr in addresses[netifaces.AF_INET6]:
                 if not addr['addr'].startswith('fe80'):
@@ -160,41 +165,42 @@
                         return str(cidr).split('/')[1]
                     else:
                         return addr[key]
+
     return None
 
 
 get_iface_for_address = partial(_get_for_address, key='iface')
 
+
 get_netmask_for_address = partial(_get_for_address, key='netmask')
 
 
 def format_ipv6_addr(address):
-    """
-    IPv6 needs to be wrapped with [] in url link to parse correctly.
+    """If address is IPv6, wrap it in '[]' otherwise return None.
+
+    This is required by most configuration files when specifying IPv6
+    addresses.
     """
     if is_ipv6(address):
-        address = "[%s]" % address
-    else:
-        log("Not a valid ipv6 address: %s" % address, level=WARNING)
-        address = None
+        return "[%s]" % address
 
-    return address
+    return None
 
 
 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
                    fatal=True, exc_list=None):
-    """
-    Return the assigned IP address for a given interface, if any, or [].
-    """
+    """Return the assigned IP address for a given interface, if any."""
     # Extract nic if passed /dev/ethX
     if '/' in iface:
         iface = iface.split('/')[-1]
+
     if not exc_list:
         exc_list = []
+
     try:
         inet_num = getattr(netifaces, inet_type)
     except AttributeError:
-        raise Exception('Unknown inet type ' + str(inet_type))
+        raise Exception("Unknown inet type '%s'" % str(inet_type))
 
     interfaces = netifaces.interfaces()
     if inc_aliases:
@@ -202,15 +208,18 @@
         for _iface in interfaces:
            if iface == _iface or _iface.split(':')[0] == iface:
                 ifaces.append(_iface)
+
         if fatal and not ifaces:
             raise Exception("Invalid interface '%s'" % iface)
+
         ifaces.sort()
     else:
         if iface not in interfaces:
             if fatal:
-                raise Exception("%s not found " % (iface))
+                raise Exception("Interface '%s' not found " % (iface))
             else:
                 return []
+
         else:
             ifaces = [iface]
 
@@ -221,10 +230,13 @@
         for entry in net_info[inet_num]:
             if 'addr' in entry and entry['addr'] not in exc_list:
                 addresses.append(entry['addr'])
+
     if fatal and not addresses:
         raise Exception("Interface '%s' doesn't have any %s addresses." %
                         (iface, inet_type))
-    return addresses
+
+    return sorted(addresses)
+
 
 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
 
@@ -241,6 +253,7 @@
             raw = re.match(ll_key, _addr)
             if raw:
                 _addr = raw.group(1)
+
             if _addr == addr:
                 log("Address '%s' is configured on iface '%s'" %
                     (addr, iface))
@@ -251,8 +264,9 @@
 
 
 def sniff_iface(f):
-    """If no iface provided, inject net iface inferred from unit private
-    address.
+    """Ensure decorated function is called with a value for iface.
+
+    If no iface provided, inject net iface inferred from unit private address.
     """
     def iface_sniffer(*args, **kwargs):
         if not kwargs.get('iface', None):
@@ -295,7 +309,7 @@
         if global_addrs:
             # Make sure any found global addresses are not temporary
             cmd = ['ip', 'addr', 'show', iface]
-            out = subprocess.check_output(cmd)
+            out = subprocess.check_output(cmd).decode('UTF-8')
             if dynamic_only:
                 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
             else:
@@ -317,33 +331,51 @@
                 return addrs
 
     if fatal:
-        raise Exception("Interface '%s' doesn't have a scope global "
+        raise Exception("Interface '%s' does not have a scope global "
                         "non-temporary ipv6 address." % iface)
 
     return []
 
 
 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
-    """
-    Return a list of bridges on the system or []
-    """
-    b_rgex = vnic_dir + '/*/bridge'
-    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
+    """Return a list of bridges on the system."""
+    b_regex = "%s/*/bridge" % vnic_dir
+    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
 
 
 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
-    """
-    Return a list of nics comprising a given bridge on the system or []
-    """
-    brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
-    return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
+    """Return a list of nics comprising a given bridge on the system."""
+    brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
+    return [x.split('/')[-1] for x in glob.glob(brif_regex)]
 
 
 def is_bridge_member(nic):
-    """
-    Check if a given nic is a member of a bridge
-    """
+    """Check if a given nic is a member of a bridge."""
     for bridge in get_bridges():
         if nic in get_bridge_nics(bridge):
             return True
+
     return False
+
+
+def configure_phy_nic_mtu(mng_ip=None):
+    """Configure mtu for physical nic."""
+    phy_nic_mtu = config('phy-nic-mtu')
+    if phy_nic_mtu >= 1500:
+        phy_nic = None
+        if mng_ip is None:
+            mng_ip = unit_get('private-address')
+        for nic in list_nics(['eth', 'bond', 'br']):
+            if mng_ip in get_ipv4_addr(nic, fatal=False):
+                phy_nic = nic
+                # need to find the associated phy nic for bridge
+                if nic.startswith('br'):
+                    for brnic in get_bridge_nics(nic):
+                        if brnic.startswith('eth') or brnic.startswith('bond'):
+                            phy_nic = brnic
+                            break
+                break
+        if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic):
+            set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True)
+            log('set mtu={} for phy_nic={}'
                .format(phy_nic_mtu, phy_nic), level=INFO)
 
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py	2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py	2014-12-05 07:18:23 +0000
@@ -1,3 +1,4 @@
+import six
 from charmhelpers.contrib.amulet.deployment import (
     AmuletDeployment
 )
@@ -69,7 +70,7 @@
 
     def _configure_services(self, configs):
         """Configure all of the services."""
-        for service, config in configs.iteritems():
+        for service, config in six.iteritems(configs):
             self.d.configure(service, config)
 
     def _get_openstack_release(self):
 
=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
--- hooks/charmhelpers/contrib/openstack/amulet/utils.py	2014-09-25 15:37:05 +0000
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py	2014-12-05 07:18:23 +0000
@@ -7,6 +7,8 @@
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
 
+import six
+
 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
 )
@@ -60,7 +62,7 @@
            expected service catalog endpoints.
         """
         self.log.debug('actual: {}'.format(repr(actual)))
-        for k, v in expected.iteritems():
+        for k, v in six.iteritems(expected):
             if k in actual:
                 ret = self._validate_dict_data(expected[k][0], actual[k][0])
                 if ret:
 
=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
--- hooks/charmhelpers/contrib/openstack/context.py	2014-10-07 21:03:47 +0000
+++ hooks/charmhelpers/contrib/openstack/context.py	2014-12-05 07:18:23 +0000
@@ -1,20 +1,18 @@
 import json
 import os
 import time
-
 from base64 import b64decode
+from subprocess import check_call
 
-from subprocess import (
-    check_call
-)
+import six
 
 from charmhelpers.fetch import (
     apt_install,
     filter_installed_packages,
 )
-
 from charmhelpers.core.hookenv import (
     config,
+    is_relation_made,
     local_unit,
     log,
     relation_get,
@@ -23,43 +21,40 @@
     relation_set,
     unit_get,
     unit_private_ip,
+    DEBUG,
+    INFO,
+    WARNING,
     ERROR,
-    INFO
 )
-
 from charmhelpers.core.host import (
     mkdir,
-    write_file
+    write_file,
 )
-
 from charmhelpers.contrib.hahelpers.cluster import (
     determine_apache_port,
     determine_api_port,
     https,
-    is_clustered
+    is_clustered,
496 | 40 | ) | 38 | ) |
497 | 41 | |||
498 | 42 | from charmhelpers.contrib.hahelpers.apache import ( | 39 | from charmhelpers.contrib.hahelpers.apache import ( |
499 | 43 | get_cert, | 40 | get_cert, |
500 | 44 | get_ca_cert, | 41 | get_ca_cert, |
501 | 45 | install_ca_cert, | 42 | install_ca_cert, |
502 | 46 | ) | 43 | ) |
503 | 47 | |||
504 | 48 | from charmhelpers.contrib.openstack.neutron import ( | 44 | from charmhelpers.contrib.openstack.neutron import ( |
505 | 49 | neutron_plugin_attribute, | 45 | neutron_plugin_attribute, |
506 | 50 | ) | 46 | ) |
507 | 51 | |||
508 | 52 | from charmhelpers.contrib.network.ip import ( | 47 | from charmhelpers.contrib.network.ip import ( |
509 | 53 | get_address_in_network, | 48 | get_address_in_network, |
510 | 54 | get_ipv6_addr, | 49 | get_ipv6_addr, |
511 | 55 | get_netmask_for_address, | 50 | get_netmask_for_address, |
512 | 56 | format_ipv6_addr, | 51 | format_ipv6_addr, |
514 | 57 | is_address_in_network | 52 | is_address_in_network, |
515 | 58 | ) | 53 | ) |
516 | 59 | |||
517 | 60 | from charmhelpers.contrib.openstack.utils import get_host_ip | 54 | from charmhelpers.contrib.openstack.utils import get_host_ip |
518 | 61 | 55 | ||
519 | 62 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' | 56 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' |
520 | 57 | ADDRESS_TYPES = ['admin', 'internal', 'public'] | ||
521 | 63 | 58 | ||
522 | 64 | 59 | ||
523 | 65 | class OSContextError(Exception): | 60 | class OSContextError(Exception): |
524 | @@ -67,7 +62,7 @@ | |||
525 | 67 | 62 | ||
526 | 68 | 63 | ||
527 | 69 | def ensure_packages(packages): | 64 | def ensure_packages(packages): |
529 | 70 | '''Install but do not upgrade required plugin packages''' | 65 | """Install but do not upgrade required plugin packages.""" |
530 | 71 | required = filter_installed_packages(packages) | 66 | required = filter_installed_packages(packages) |
531 | 72 | if required: | 67 | if required: |
532 | 73 | apt_install(required, fatal=True) | 68 | apt_install(required, fatal=True) |
533 | @@ -75,20 +70,27 @@ | |||
534 | 75 | 70 | ||
535 | 76 | def context_complete(ctxt): | 71 | def context_complete(ctxt): |
536 | 77 | _missing = [] | 72 | _missing = [] |
538 | 78 | for k, v in ctxt.iteritems(): | 73 | for k, v in six.iteritems(ctxt): |
539 | 79 | if v is None or v == '': | 74 | if v is None or v == '': |
540 | 80 | _missing.append(k) | 75 | _missing.append(k) |
541 | 76 | |||
542 | 81 | if _missing: | 77 | if _missing: |
544 | 82 | log('Missing required data: %s' % ' '.join(_missing), level='INFO') | 78 | log('Missing required data: %s' % ' '.join(_missing), level=INFO) |
545 | 83 | return False | 79 | return False |
546 | 80 | |||
547 | 84 | return True | 81 | return True |
548 | 85 | 82 | ||
549 | 86 | 83 | ||
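The `context_complete` helper touched above treats a context as usable only when every value is present. A simplified sketch of that check (the charmhelpers version also logs the missing keys, omitted here):

```python
def context_complete(ctxt):
    """Return True only if no value in the context dict is None or ''."""
    missing = [k for k, v in ctxt.items() if v is None or v == '']
    return not missing

assert context_complete({'database_host': '10.0.0.1', 'database': 'nova'})
assert not context_complete({'database_host': '10.0.0.1', 'database': None})
```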
550 | 87 | def config_flags_parser(config_flags): | 84 | def config_flags_parser(config_flags): |
551 | 85 | """Parses config flags string into dict. | ||
552 | 86 | |||
553 | 87 | The provided config_flags string may be a list of comma-separated values | ||
554 | 88 | which themselves may be comma-separated list of values. | ||
555 | 89 | """ | ||
556 | 88 | if config_flags.find('==') >= 0: | 90 | if config_flags.find('==') >= 0: |
559 | 89 | log("config_flags is not in expected format (key=value)", | 91 | log("config_flags is not in expected format (key=value)", level=ERROR) |
558 | 90 | level=ERROR) | ||
560 | 91 | raise OSContextError | 92 | raise OSContextError |
561 | 93 | |||
562 | 92 | # strip the following from each value. | 94 | # strip the following from each value. |
563 | 93 | post_strippers = ' ,' | 95 | post_strippers = ' ,' |
564 | 94 | # we strip any leading/trailing '=' or ' ' from the string then | 96 | # we strip any leading/trailing '=' or ' ' from the string then |
565 | @@ -96,7 +98,7 @@ | |||
566 | 96 | split = config_flags.strip(' =').split('=') | 98 | split = config_flags.strip(' =').split('=') |
567 | 97 | limit = len(split) | 99 | limit = len(split) |
568 | 98 | flags = {} | 100 | flags = {} |
570 | 99 | for i in xrange(0, limit - 1): | 101 | for i in range(0, limit - 1): |
571 | 100 | current = split[i] | 102 | current = split[i] |
572 | 101 | next = split[i + 1] | 103 | next = split[i + 1] |
573 | 102 | vindex = next.rfind(',') | 104 | vindex = next.rfind(',') |
574 | @@ -111,17 +113,18 @@ | |||
575 | 111 | # if this not the first entry, expect an embedded key. | 113 | # if this not the first entry, expect an embedded key. |
576 | 112 | index = current.rfind(',') | 114 | index = current.rfind(',') |
577 | 113 | if index < 0: | 115 | if index < 0: |
580 | 114 | log("invalid config value(s) at index %s" % (i), | 116 | log("Invalid config value(s) at index %s" % (i), level=ERROR) |
579 | 115 | level=ERROR) | ||
581 | 116 | raise OSContextError | 117 | raise OSContextError |
582 | 117 | key = current[index + 1:] | 118 | key = current[index + 1:] |
583 | 118 | 119 | ||
584 | 119 | # Add to collection. | 120 | # Add to collection. |
585 | 120 | flags[key.strip(post_strippers)] = value.rstrip(post_strippers) | 121 | flags[key.strip(post_strippers)] = value.rstrip(post_strippers) |
586 | 122 | |||
587 | 121 | return flags | 123 | return flags |
588 | 122 | 124 | ||
589 | 123 | 125 | ||
590 | 124 | class OSContextGenerator(object): | 126 | class OSContextGenerator(object): |
591 | 127 | """Base class for all context generators.""" | ||
592 | 125 | interfaces = [] | 128 | interfaces = [] |
593 | 126 | 129 | ||
594 | 127 | def __call__(self): | 130 | def __call__(self): |
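To make the `config_flags_parser` hunks concrete, here is a sketch of the whole function with example inputs. The middle of the function (how `value` is cut out of the next split segment) falls outside the hunks shown above, so that branch is an assumption based on the surrounding logic; errors are raised as `ValueError` here rather than `OSContextError` to keep the sketch self-contained.

```python
def config_flags_parser(config_flags):
    """Parse a 'k1=v1,k2=v2,...' string into a dict.

    Values may themselves contain commas; the last comma before the next
    '=' is taken as the separator. Sketch of the charmhelpers helper.
    """
    if config_flags.find('==') >= 0:
        raise ValueError("config_flags is not in expected format (key=value)")

    post_strippers = ' ,'
    # Strip leading/trailing '=' or ' ', then split on '='.
    split = config_flags.strip(' =').split('=')
    limit = len(split)
    flags = {}
    for i in range(0, limit - 1):
        current = split[i]
        nxt = split[i + 1]
        vindex = nxt.rfind(',')
        # Assumption: last entry, or no comma, means the whole segment
        # is the value; otherwise the tail past the comma is the next key.
        if i == limit - 2 or vindex < 0:
            value = nxt
        else:
            value = nxt[:vindex]

        if i == 0:
            key = current
        else:
            # If this is not the first entry, expect an embedded key.
            index = current.rfind(',')
            if index < 0:
                raise ValueError("invalid config value(s) at index %s" % i)
            key = current[index + 1:]

        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)

    return flags

assert config_flags_parser("key1=val1,key2=val2") == {'key1': 'val1',
                                                      'key2': 'val2'}
# A comma inside a value survives:
assert config_flags_parser("k1=a,b,k2=c") == {'k1': 'a,b', 'k2': 'c'}
```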
595 | @@ -133,11 +136,11 @@ | |||
596 | 133 | 136 | ||
597 | 134 | def __init__(self, | 137 | def __init__(self, |
598 | 135 | database=None, user=None, relation_prefix=None, ssl_dir=None): | 138 | database=None, user=None, relation_prefix=None, ssl_dir=None): |
604 | 136 | ''' | 139 | """Allows inspecting relation for settings prefixed with |
605 | 137 | Allows inspecting relation for settings prefixed with relation_prefix. | 140 | relation_prefix. This is useful for parsing access for multiple |
606 | 138 | This is useful for parsing access for multiple databases returned via | 141 | databases returned via the shared-db interface (eg, nova_password, |
607 | 139 | the shared-db interface (eg, nova_password, quantum_password) | 142 | quantum_password) |
608 | 140 | ''' | 143 | """ |
609 | 141 | self.relation_prefix = relation_prefix | 144 | self.relation_prefix = relation_prefix |
610 | 142 | self.database = database | 145 | self.database = database |
611 | 143 | self.user = user | 146 | self.user = user |
612 | @@ -147,9 +150,8 @@ | |||
613 | 147 | self.database = self.database or config('database') | 150 | self.database = self.database or config('database') |
614 | 148 | self.user = self.user or config('database-user') | 151 | self.user = self.user or config('database-user') |
615 | 149 | if None in [self.database, self.user]: | 152 | if None in [self.database, self.user]: |
619 | 150 | log('Could not generate shared_db context. ' | 153 | log("Could not generate shared_db context. Missing required charm " |
620 | 151 | 'Missing required charm config options. ' | 154 | "config options. (database name and user)", level=ERROR) |
618 | 152 | '(database name and user)') | ||
621 | 153 | raise OSContextError | 155 | raise OSContextError |
622 | 154 | 156 | ||
623 | 155 | ctxt = {} | 157 | ctxt = {} |
624 | @@ -202,23 +204,24 @@ | |||
625 | 202 | def __call__(self): | 204 | def __call__(self): |
626 | 203 | self.database = self.database or config('database') | 205 | self.database = self.database or config('database') |
627 | 204 | if self.database is None: | 206 | if self.database is None: |
631 | 205 | log('Could not generate postgresql_db context. ' | 207 | log('Could not generate postgresql_db context. Missing required ' |
632 | 206 | 'Missing required charm config options. ' | 208 | 'charm config options. (database name)', level=ERROR) |
630 | 207 | '(database name)') | ||
633 | 208 | raise OSContextError | 209 | raise OSContextError |
634 | 210 | |||
635 | 209 | ctxt = {} | 211 | ctxt = {} |
636 | 210 | |||
637 | 211 | for rid in relation_ids(self.interfaces[0]): | 212 | for rid in relation_ids(self.interfaces[0]): |
638 | 212 | for unit in related_units(rid): | 213 | for unit in related_units(rid): |
646 | 213 | ctxt = { | 214 | rel_host = relation_get('host', rid=rid, unit=unit) |
647 | 214 | 'database_host': relation_get('host', rid=rid, unit=unit), | 215 | rel_user = relation_get('user', rid=rid, unit=unit) |
648 | 215 | 'database': self.database, | 216 | rel_passwd = relation_get('password', rid=rid, unit=unit) |
649 | 216 | 'database_user': relation_get('user', rid=rid, unit=unit), | 217 | ctxt = {'database_host': rel_host, |
650 | 217 | 'database_password': relation_get('password', rid=rid, unit=unit), | 218 | 'database': self.database, |
651 | 218 | 'database_type': 'postgresql', | 219 | 'database_user': rel_user, |
652 | 219 | } | 220 | 'database_password': rel_passwd, |
653 | 221 | 'database_type': 'postgresql'} | ||
654 | 220 | if context_complete(ctxt): | 222 | if context_complete(ctxt): |
655 | 221 | return ctxt | 223 | return ctxt |
656 | 224 | |||
657 | 222 | return {} | 225 | return {} |
658 | 223 | 226 | ||
659 | 224 | 227 | ||
660 | @@ -227,23 +230,29 @@ | |||
661 | 227 | ca_path = os.path.join(ssl_dir, 'db-client.ca') | 230 | ca_path = os.path.join(ssl_dir, 'db-client.ca') |
662 | 228 | with open(ca_path, 'w') as fh: | 231 | with open(ca_path, 'w') as fh: |
663 | 229 | fh.write(b64decode(rdata['ssl_ca'])) | 232 | fh.write(b64decode(rdata['ssl_ca'])) |
664 | 233 | |||
665 | 230 | ctxt['database_ssl_ca'] = ca_path | 234 | ctxt['database_ssl_ca'] = ca_path |
666 | 231 | elif 'ssl_ca' in rdata: | 235 | elif 'ssl_ca' in rdata: |
668 | 232 | log("Charm not setup for ssl support but ssl ca found") | 236 | log("Charm not setup for ssl support but ssl ca found", level=INFO) |
669 | 233 | return ctxt | 237 | return ctxt |
670 | 238 | |||
671 | 234 | if 'ssl_cert' in rdata: | 239 | if 'ssl_cert' in rdata: |
672 | 235 | cert_path = os.path.join( | 240 | cert_path = os.path.join( |
673 | 236 | ssl_dir, 'db-client.cert') | 241 | ssl_dir, 'db-client.cert') |
674 | 237 | if not os.path.exists(cert_path): | 242 | if not os.path.exists(cert_path): |
676 | 238 | log("Waiting 1m for ssl client cert validity") | 243 | log("Waiting 1m for ssl client cert validity", level=INFO) |
677 | 239 | time.sleep(60) | 244 | time.sleep(60) |
678 | 245 | |||
679 | 240 | with open(cert_path, 'w') as fh: | 246 | with open(cert_path, 'w') as fh: |
680 | 241 | fh.write(b64decode(rdata['ssl_cert'])) | 247 | fh.write(b64decode(rdata['ssl_cert'])) |
681 | 248 | |||
682 | 242 | ctxt['database_ssl_cert'] = cert_path | 249 | ctxt['database_ssl_cert'] = cert_path |
683 | 243 | key_path = os.path.join(ssl_dir, 'db-client.key') | 250 | key_path = os.path.join(ssl_dir, 'db-client.key') |
684 | 244 | with open(key_path, 'w') as fh: | 251 | with open(key_path, 'w') as fh: |
685 | 245 | fh.write(b64decode(rdata['ssl_key'])) | 252 | fh.write(b64decode(rdata['ssl_key'])) |
686 | 253 | |||
687 | 246 | ctxt['database_ssl_key'] = key_path | 254 | ctxt['database_ssl_key'] = key_path |
688 | 255 | |||
689 | 247 | return ctxt | 256 | return ctxt |
690 | 248 | 257 | ||
691 | 249 | 258 | ||
692 | @@ -251,9 +260,8 @@ | |||
693 | 251 | interfaces = ['identity-service'] | 260 | interfaces = ['identity-service'] |
694 | 252 | 261 | ||
695 | 253 | def __call__(self): | 262 | def __call__(self): |
697 | 254 | log('Generating template context for identity-service') | 263 | log('Generating template context for identity-service', level=DEBUG) |
698 | 255 | ctxt = {} | 264 | ctxt = {} |
699 | 256 | |||
700 | 257 | for rid in relation_ids('identity-service'): | 265 | for rid in relation_ids('identity-service'): |
701 | 258 | for unit in related_units(rid): | 266 | for unit in related_units(rid): |
702 | 259 | rdata = relation_get(rid=rid, unit=unit) | 267 | rdata = relation_get(rid=rid, unit=unit) |
703 | @@ -261,26 +269,24 @@ | |||
704 | 261 | serv_host = format_ipv6_addr(serv_host) or serv_host | 269 | serv_host = format_ipv6_addr(serv_host) or serv_host |
705 | 262 | auth_host = rdata.get('auth_host') | 270 | auth_host = rdata.get('auth_host') |
706 | 263 | auth_host = format_ipv6_addr(auth_host) or auth_host | 271 | auth_host = format_ipv6_addr(auth_host) or auth_host |
721 | 264 | 272 | svc_protocol = rdata.get('service_protocol') or 'http' | |
722 | 265 | ctxt = { | 273 | auth_protocol = rdata.get('auth_protocol') or 'http' |
723 | 266 | 'service_port': rdata.get('service_port'), | 274 | ctxt = {'service_port': rdata.get('service_port'), |
724 | 267 | 'service_host': serv_host, | 275 | 'service_host': serv_host, |
725 | 268 | 'auth_host': auth_host, | 276 | 'auth_host': auth_host, |
726 | 269 | 'auth_port': rdata.get('auth_port'), | 277 | 'auth_port': rdata.get('auth_port'), |
727 | 270 | 'admin_tenant_name': rdata.get('service_tenant'), | 278 | 'admin_tenant_name': rdata.get('service_tenant'), |
728 | 271 | 'admin_user': rdata.get('service_username'), | 279 | 'admin_user': rdata.get('service_username'), |
729 | 272 | 'admin_password': rdata.get('service_password'), | 280 | 'admin_password': rdata.get('service_password'), |
730 | 273 | 'service_protocol': | 281 | 'service_protocol': svc_protocol, |
731 | 274 | rdata.get('service_protocol') or 'http', | 282 | 'auth_protocol': auth_protocol} |
718 | 275 | 'auth_protocol': | ||
719 | 276 | rdata.get('auth_protocol') or 'http', | ||
720 | 277 | } | ||
732 | 278 | if context_complete(ctxt): | 283 | if context_complete(ctxt): |
733 | 279 | # NOTE(jamespage) this is required for >= icehouse | 284 | # NOTE(jamespage) this is required for >= icehouse |
734 | 280 | # so a missing value just indicates keystone needs | 285 | # so a missing value just indicates keystone needs |
735 | 281 | # upgrading | 286 | # upgrading |
736 | 282 | ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') | 287 | ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') |
737 | 283 | return ctxt | 288 | return ctxt |
738 | 289 | |||
739 | 284 | return {} | 290 | return {} |
740 | 285 | 291 | ||
741 | 286 | 292 | ||
742 | @@ -293,21 +299,23 @@ | |||
743 | 293 | self.interfaces = [rel_name] | 299 | self.interfaces = [rel_name] |
744 | 294 | 300 | ||
745 | 295 | def __call__(self): | 301 | def __call__(self): |
747 | 296 | log('Generating template context for amqp') | 302 | log('Generating template context for amqp', level=DEBUG) |
748 | 297 | conf = config() | 303 | conf = config() |
749 | 298 | user_setting = 'rabbit-user' | ||
750 | 299 | vhost_setting = 'rabbit-vhost' | ||
751 | 300 | if self.relation_prefix: | 304 | if self.relation_prefix: |
754 | 301 | user_setting = self.relation_prefix + '-rabbit-user' | 305 | user_setting = '%s-rabbit-user' % (self.relation_prefix) |
755 | 302 | vhost_setting = self.relation_prefix + '-rabbit-vhost' | 306 | vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix) |
756 | 307 | else: | ||
757 | 308 | user_setting = 'rabbit-user' | ||
758 | 309 | vhost_setting = 'rabbit-vhost' | ||
759 | 303 | 310 | ||
760 | 304 | try: | 311 | try: |
761 | 305 | username = conf[user_setting] | 312 | username = conf[user_setting] |
762 | 306 | vhost = conf[vhost_setting] | 313 | vhost = conf[vhost_setting] |
763 | 307 | except KeyError as e: | 314 | except KeyError as e: |
766 | 308 | log('Could not generate shared_db context. ' | 315 | log('Could not generate shared_db context. Missing required charm ' |
767 | 309 | 'Missing required charm config options: %s.' % e) | 316 | 'config options: %s.' % e, level=ERROR) |
768 | 310 | raise OSContextError | 317 | raise OSContextError |
769 | 318 | |||
770 | 311 | ctxt = {} | 319 | ctxt = {} |
771 | 312 | for rid in relation_ids(self.rel_name): | 320 | for rid in relation_ids(self.rel_name): |
772 | 313 | ha_vip_only = False | 321 | ha_vip_only = False |
773 | @@ -321,6 +329,7 @@ | |||
774 | 321 | host = relation_get('private-address', rid=rid, unit=unit) | 329 | host = relation_get('private-address', rid=rid, unit=unit) |
775 | 322 | host = format_ipv6_addr(host) or host | 330 | host = format_ipv6_addr(host) or host |
776 | 323 | ctxt['rabbitmq_host'] = host | 331 | ctxt['rabbitmq_host'] = host |
777 | 332 | |||
778 | 324 | ctxt.update({ | 333 | ctxt.update({ |
779 | 325 | 'rabbitmq_user': username, | 334 | 'rabbitmq_user': username, |
780 | 326 | 'rabbitmq_password': relation_get('password', rid=rid, | 335 | 'rabbitmq_password': relation_get('password', rid=rid, |
781 | @@ -331,6 +340,7 @@ | |||
782 | 331 | ssl_port = relation_get('ssl_port', rid=rid, unit=unit) | 340 | ssl_port = relation_get('ssl_port', rid=rid, unit=unit) |
783 | 332 | if ssl_port: | 341 | if ssl_port: |
784 | 333 | ctxt['rabbit_ssl_port'] = ssl_port | 342 | ctxt['rabbit_ssl_port'] = ssl_port |
785 | 343 | |||
786 | 334 | ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) | 344 | ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) |
787 | 335 | if ssl_ca: | 345 | if ssl_ca: |
788 | 336 | ctxt['rabbit_ssl_ca'] = ssl_ca | 346 | ctxt['rabbit_ssl_ca'] = ssl_ca |
789 | @@ -344,41 +354,45 @@ | |||
790 | 344 | if context_complete(ctxt): | 354 | if context_complete(ctxt): |
791 | 345 | if 'rabbit_ssl_ca' in ctxt: | 355 | if 'rabbit_ssl_ca' in ctxt: |
792 | 346 | if not self.ssl_dir: | 356 | if not self.ssl_dir: |
795 | 347 | log(("Charm not setup for ssl support " | 357 | log("Charm not setup for ssl support but ssl ca " |
796 | 348 | "but ssl ca found")) | 358 | "found", level=INFO) |
797 | 349 | break | 359 | break |
798 | 360 | |||
799 | 350 | ca_path = os.path.join( | 361 | ca_path = os.path.join( |
800 | 351 | self.ssl_dir, 'rabbit-client-ca.pem') | 362 | self.ssl_dir, 'rabbit-client-ca.pem') |
801 | 352 | with open(ca_path, 'w') as fh: | 363 | with open(ca_path, 'w') as fh: |
802 | 353 | fh.write(b64decode(ctxt['rabbit_ssl_ca'])) | 364 | fh.write(b64decode(ctxt['rabbit_ssl_ca'])) |
803 | 354 | ctxt['rabbit_ssl_ca'] = ca_path | 365 | ctxt['rabbit_ssl_ca'] = ca_path |
804 | 366 | |||
805 | 355 | # Sufficient information found = break out! | 367 | # Sufficient information found = break out! |
806 | 356 | break | 368 | break |
807 | 369 | |||
808 | 357 | # Used for active/active rabbitmq >= grizzly | 370 | # Used for active/active rabbitmq >= grizzly |
811 | 358 | if ('clustered' not in ctxt or ha_vip_only) \ | 371 | if (('clustered' not in ctxt or ha_vip_only) and |
812 | 359 | and len(related_units(rid)) > 1: | 372 | len(related_units(rid)) > 1): |
813 | 360 | rabbitmq_hosts = [] | 373 | rabbitmq_hosts = [] |
814 | 361 | for unit in related_units(rid): | 374 | for unit in related_units(rid): |
815 | 362 | host = relation_get('private-address', rid=rid, unit=unit) | 375 | host = relation_get('private-address', rid=rid, unit=unit) |
816 | 363 | host = format_ipv6_addr(host) or host | 376 | host = format_ipv6_addr(host) or host |
817 | 364 | rabbitmq_hosts.append(host) | 377 | rabbitmq_hosts.append(host) |
819 | 365 | ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) | 378 | |
820 | 379 | ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts)) | ||
821 | 380 | |||
822 | 366 | if not context_complete(ctxt): | 381 | if not context_complete(ctxt): |
823 | 367 | return {} | 382 | return {} |
826 | 368 | else: | 383 | |
827 | 369 | return ctxt | 384 | return ctxt |
828 | 370 | 385 | ||
829 | 371 | 386 | ||
830 | 372 | class CephContext(OSContextGenerator): | 387 | class CephContext(OSContextGenerator): |
831 | 388 | """Generates context for /etc/ceph/ceph.conf templates.""" | ||
832 | 373 | interfaces = ['ceph'] | 389 | interfaces = ['ceph'] |
833 | 374 | 390 | ||
834 | 375 | def __call__(self): | 391 | def __call__(self): |
835 | 376 | '''This generates context for /etc/ceph/ceph.conf templates''' | ||
836 | 377 | if not relation_ids('ceph'): | 392 | if not relation_ids('ceph'): |
837 | 378 | return {} | 393 | return {} |
838 | 379 | 394 | ||
841 | 380 | log('Generating template context for ceph') | 395 | log('Generating template context for ceph', level=DEBUG) |
840 | 381 | |||
842 | 382 | mon_hosts = [] | 396 | mon_hosts = [] |
843 | 383 | auth = None | 397 | auth = None |
844 | 384 | key = None | 398 | key = None |
845 | @@ -387,18 +401,18 @@ | |||
846 | 387 | for unit in related_units(rid): | 401 | for unit in related_units(rid): |
847 | 388 | auth = relation_get('auth', rid=rid, unit=unit) | 402 | auth = relation_get('auth', rid=rid, unit=unit) |
848 | 389 | key = relation_get('key', rid=rid, unit=unit) | 403 | key = relation_get('key', rid=rid, unit=unit) |
852 | 390 | ceph_addr = \ | 404 | ceph_pub_addr = relation_get('ceph-public-address', rid=rid, |
853 | 391 | relation_get('ceph-public-address', rid=rid, unit=unit) or \ | 405 | unit=unit) |
854 | 392 | relation_get('private-address', rid=rid, unit=unit) | 406 | unit_priv_addr = relation_get('private-address', rid=rid, |
855 | 407 | unit=unit) | ||
856 | 408 | ceph_addr = ceph_pub_addr or unit_priv_addr | ||
857 | 393 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr | 409 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr |
858 | 394 | mon_hosts.append(ceph_addr) | 410 | mon_hosts.append(ceph_addr) |
859 | 395 | 411 | ||
866 | 396 | ctxt = { | 412 | ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)), |
867 | 397 | 'mon_hosts': ' '.join(mon_hosts), | 413 | 'auth': auth, |
868 | 398 | 'auth': auth, | 414 | 'key': key, |
869 | 399 | 'key': key, | 415 | 'use_syslog': use_syslog} |
864 | 400 | 'use_syslog': use_syslog | ||
865 | 401 | } | ||
870 | 402 | 416 | ||
871 | 403 | if not os.path.isdir('/etc/ceph'): | 417 | if not os.path.isdir('/etc/ceph'): |
872 | 404 | os.mkdir('/etc/ceph') | 418 | os.mkdir('/etc/ceph') |
873 | @@ -407,79 +421,68 @@ | |||
874 | 407 | return {} | 421 | return {} |
875 | 408 | 422 | ||
876 | 409 | ensure_packages(['ceph-common']) | 423 | ensure_packages(['ceph-common']) |
877 | 410 | |||
878 | 411 | return ctxt | 424 | return ctxt |
879 | 412 | 425 | ||
880 | 413 | 426 | ||
881 | 414 | ADDRESS_TYPES = ['admin', 'internal', 'public'] | ||
882 | 415 | |||
883 | 416 | |||
884 | 417 | class HAProxyContext(OSContextGenerator): | 427 | class HAProxyContext(OSContextGenerator): |
885 | 428 | """Provides half a context for the haproxy template, which describes | ||
886 | 429 | all peers to be included in the cluster. Each charm needs to include | ||
887 | 430 | its own context generator that describes the port mapping. | ||
888 | 431 | """ | ||
889 | 418 | interfaces = ['cluster'] | 432 | interfaces = ['cluster'] |
890 | 419 | 433 | ||
891 | 434 | def __init__(self, singlenode_mode=False): | ||
892 | 435 | self.singlenode_mode = singlenode_mode | ||
893 | 436 | |||
894 | 420 | def __call__(self): | 437 | def __call__(self): |
901 | 421 | ''' | 438 | if not relation_ids('cluster') and not self.singlenode_mode: |
896 | 422 | Builds half a context for the haproxy template, which describes | ||
897 | 423 | all peers to be included in the cluster. Each charm needs to include | ||
898 | 424 | its own context generator that describes the port mapping. | ||
899 | 425 | ''' | ||
900 | 426 | if not relation_ids('cluster'): | ||
902 | 427 | return {} | 439 | return {} |
903 | 428 | 440 | ||
904 | 429 | l_unit = local_unit().replace('/', '-') | ||
905 | 430 | |||
906 | 431 | if config('prefer-ipv6'): | 441 | if config('prefer-ipv6'): |
907 | 432 | addr = get_ipv6_addr(exc_list=[config('vip')])[0] | 442 | addr = get_ipv6_addr(exc_list=[config('vip')])[0] |
908 | 433 | else: | 443 | else: |
909 | 434 | addr = get_host_ip(unit_get('private-address')) | 444 | addr = get_host_ip(unit_get('private-address')) |
910 | 435 | 445 | ||
911 | 446 | l_unit = local_unit().replace('/', '-') | ||
912 | 436 | cluster_hosts = {} | 447 | cluster_hosts = {} |
913 | 437 | 448 | ||
914 | 438 | # NOTE(jamespage): build out map of configured network endpoints | 449 | # NOTE(jamespage): build out map of configured network endpoints |
915 | 439 | # and associated backends | 450 | # and associated backends |
916 | 440 | for addr_type in ADDRESS_TYPES: | 451 | for addr_type in ADDRESS_TYPES: |
919 | 441 | laddr = get_address_in_network( | 452 | cfg_opt = 'os-{}-network'.format(addr_type) |
920 | 442 | config('os-{}-network'.format(addr_type))) | 453 | laddr = get_address_in_network(config(cfg_opt)) |
921 | 443 | if laddr: | 454 | if laddr: |
929 | 444 | cluster_hosts[laddr] = {} | 455 | netmask = get_netmask_for_address(laddr) |
930 | 445 | cluster_hosts[laddr]['network'] = "{}/{}".format( | 456 | cluster_hosts[laddr] = {'network': "{}/{}".format(laddr, |
931 | 446 | laddr, | 457 | netmask), |
932 | 447 | get_netmask_for_address(laddr) | 458 | 'backends': {l_unit: laddr}} |
926 | 448 | ) | ||
927 | 449 | cluster_hosts[laddr]['backends'] = {} | ||
928 | 450 | cluster_hosts[laddr]['backends'][l_unit] = laddr | ||
933 | 451 | for rid in relation_ids('cluster'): | 459 | for rid in relation_ids('cluster'): |
934 | 452 | for unit in related_units(rid): | 460 | for unit in related_units(rid): |
935 | 453 | _unit = unit.replace('/', '-') | ||
936 | 454 | _laddr = relation_get('{}-address'.format(addr_type), | 461 | _laddr = relation_get('{}-address'.format(addr_type), |
937 | 455 | rid=rid, unit=unit) | 462 | rid=rid, unit=unit) |
938 | 456 | if _laddr: | 463 | if _laddr: |
939 | 464 | _unit = unit.replace('/', '-') | ||
940 | 457 | cluster_hosts[laddr]['backends'][_unit] = _laddr | 465 | cluster_hosts[laddr]['backends'][_unit] = _laddr |
941 | 458 | 466 | ||
942 | 459 | # NOTE(jamespage) no split configurations found, just use | 467 | # NOTE(jamespage) no split configurations found, just use |
943 | 460 | # private addresses | 468 | # private addresses |
944 | 461 | if not cluster_hosts: | 469 | if not cluster_hosts: |
952 | 462 | cluster_hosts[addr] = {} | 470 | netmask = get_netmask_for_address(addr) |
953 | 463 | cluster_hosts[addr]['network'] = "{}/{}".format( | 471 | cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask), |
954 | 464 | addr, | 472 | 'backends': {l_unit: addr}} |
948 | 465 | get_netmask_for_address(addr) | ||
949 | 466 | ) | ||
950 | 467 | cluster_hosts[addr]['backends'] = {} | ||
951 | 468 | cluster_hosts[addr]['backends'][l_unit] = addr | ||
955 | 469 | for rid in relation_ids('cluster'): | 473 | for rid in relation_ids('cluster'): |
956 | 470 | for unit in related_units(rid): | 474 | for unit in related_units(rid): |
957 | 471 | _unit = unit.replace('/', '-') | ||
958 | 472 | _laddr = relation_get('private-address', | 475 | _laddr = relation_get('private-address', |
959 | 473 | rid=rid, unit=unit) | 476 | rid=rid, unit=unit) |
960 | 474 | if _laddr: | 477 | if _laddr: |
961 | 478 | _unit = unit.replace('/', '-') | ||
962 | 475 | cluster_hosts[addr]['backends'][_unit] = _laddr | 479 | cluster_hosts[addr]['backends'][_unit] = _laddr |
963 | 476 | 480 | ||
967 | 477 | ctxt = { | 481 | ctxt = {'frontends': cluster_hosts} |
965 | 478 | 'frontends': cluster_hosts, | ||
966 | 479 | } | ||
968 | 480 | 482 | ||
969 | 481 | if config('haproxy-server-timeout'): | 483 | if config('haproxy-server-timeout'): |
970 | 482 | ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') | 484 | ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') |
971 | 485 | |||
972 | 483 | if config('haproxy-client-timeout'): | 486 | if config('haproxy-client-timeout'): |
973 | 484 | ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') | 487 | ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') |
974 | 485 | 488 | ||
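The `HAProxyContext` refactor above collapses the stepwise `cluster_hosts[laddr][...] = ...` assignments into single dict literals without changing the resulting structure. A sketch of the shape the template receives, using made-up unit names and addresses:

```python
# Hypothetical local unit and network values.
l_unit = 'quantum-gateway-0'
laddr = '10.5.0.10'
netmask = '255.255.255.0'

# Refactored single-literal form from the diff.
cluster_hosts = {laddr: {'network': "{}/{}".format(laddr, netmask),
                         'backends': {l_unit: laddr}}}

# A peer unit found on the cluster relation is added to the same frontend.
cluster_hosts[laddr]['backends']['quantum-gateway-1'] = '10.5.0.11'

ctxt = {'frontends': cluster_hosts}
assert ctxt['frontends']['10.5.0.10']['network'] == '10.5.0.10/255.255.255.0'
assert len(ctxt['frontends']['10.5.0.10']['backends']) == 2
```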
975 | @@ -493,13 +496,18 @@ | |||
976 | 493 | ctxt['stat_port'] = ':8888' | 496 | ctxt['stat_port'] = ':8888' |
977 | 494 | 497 | ||
978 | 495 | for frontend in cluster_hosts: | 498 | for frontend in cluster_hosts: |
980 | 496 | if len(cluster_hosts[frontend]['backends']) > 1: | 499 | if (len(cluster_hosts[frontend]['backends']) > 1 or |
981 | 500 | self.singlenode_mode): | ||
982 | 497 | # Enable haproxy when we have enough peers. | 501 | # Enable haproxy when we have enough peers. |
984 | 498 | log('Ensuring haproxy enabled in /etc/default/haproxy.') | 502 | log('Ensuring haproxy enabled in /etc/default/haproxy.', |
985 | 503 | level=DEBUG) | ||
986 | 499 | with open('/etc/default/haproxy', 'w') as out: | 504 | with open('/etc/default/haproxy', 'w') as out: |
987 | 500 | out.write('ENABLED=1\n') | 505 | out.write('ENABLED=1\n') |
988 | 506 | |||
989 | 501 | return ctxt | 507 | return ctxt |
991 | 502 | log('HAProxy context is incomplete, this unit has no peers.') | 508 | |
992 | 509 | log('HAProxy context is incomplete, this unit has no peers.', | ||
993 | 510 | level=INFO) | ||
994 | 503 | return {} | 511 | return {} |
995 | 504 | 512 | ||
996 | 505 | 513 | ||
997 | @@ -507,29 +515,28 @@ | |||
998 | 507 | interfaces = ['image-service'] | 515 | interfaces = ['image-service'] |
999 | 508 | 516 | ||
1000 | 509 | def __call__(self): | 517 | def __call__(self): |
1006 | 510 | ''' | 518 | """Obtains the glance API server from the image-service relation. |
1007 | 511 | Obtains the glance API server from the image-service relation. Useful | 519 | Useful in nova and cinder (currently). |
1008 | 512 | in nova and cinder (currently). | 520 | """ |
1009 | 513 | ''' | 521 | log('Generating template context for image-service.', level=DEBUG) |
1005 | 514 | log('Generating template context for image-service.') | ||
1010 | 515 | rids = relation_ids('image-service') | 522 | rids = relation_ids('image-service') |
1011 | 516 | if not rids: | 523 | if not rids: |
1012 | 517 | return {} | 524 | return {} |
1013 | 525 | |||
1014 | 518 | for rid in rids: | 526 | for rid in rids: |
1015 | 519 | for unit in related_units(rid): | 527 | for unit in related_units(rid): |
1016 | 520 | api_server = relation_get('glance-api-server', | 528 | api_server = relation_get('glance-api-server', |
1017 | 521 | rid=rid, unit=unit) | 529 | rid=rid, unit=unit) |
1018 | 522 | if api_server: | 530 | if api_server: |
1019 | 523 | return {'glance_api_servers': api_server} | 531 | return {'glance_api_servers': api_server} |
1022 | 524 | log('ImageService context is incomplete. ' | 532 | |
1023 | 525 | 'Missing required relation data.') | 533 | log("ImageService context is incomplete. Missing required relation " |
1024 | 534 | "data.", level=INFO) | ||
1025 | 526 | return {} | 535 | return {} |
1026 | 527 | 536 | ||
1027 | 528 | 537 | ||
1028 | 529 | class ApacheSSLContext(OSContextGenerator): | 538 | class ApacheSSLContext(OSContextGenerator): |
1032 | 530 | 539 | """Generates a context for an apache vhost configuration that configures | |
1030 | 531 | """ | ||
1031 | 532 | Generates a context for an apache vhost configuration that configures | ||
1033 | 533 | HTTPS reverse proxying for one or many endpoints. Generated context | 540 | HTTPS reverse proxying for one or many endpoints. Generated context |
1034 | 534 | looks something like:: | 541 | looks something like:: |
1035 | 535 | 542 | ||
1036 | @@ -563,6 +570,7 @@ | |||
1037 | 563 | else: | 570 | else: |
1038 | 564 | cert_filename = 'cert' | 571 | cert_filename = 'cert' |
1039 | 565 | key_filename = 'key' | 572 | key_filename = 'key' |
1040 | 573 | |||
1041 | 566 | write_file(path=os.path.join(ssl_dir, cert_filename), | 574 | write_file(path=os.path.join(ssl_dir, cert_filename), |
1042 | 567 | content=b64decode(cert)) | 575 | content=b64decode(cert)) |
1043 | 568 | write_file(path=os.path.join(ssl_dir, key_filename), | 576 | write_file(path=os.path.join(ssl_dir, key_filename), |
1044 | @@ -574,7 +582,8 @@ | |||
1045 | 574 | install_ca_cert(b64decode(ca_cert)) | 582 | install_ca_cert(b64decode(ca_cert)) |
1046 | 575 | 583 | ||
1047 | 576 | def canonical_names(self): | 584 | def canonical_names(self): |
1049 | 577 | '''Figure out which canonical names clients will access this service''' | 585 | """Figure out which canonical names clients will access this service. |
1050 | 586 | """ | ||
1051 | 578 | cns = [] | 587 | cns = [] |
1052 | 579 | for r_id in relation_ids('identity-service'): | 588 | for r_id in relation_ids('identity-service'): |
1053 | 580 | for unit in related_units(r_id): | 589 | for unit in related_units(r_id): |
1054 | @@ -582,55 +591,80 @@ | |||
1055 | 582 | for k in rdata: | 591 | for k in rdata: |
1056 | 583 | if k.startswith('ssl_key_'): | 592 | if k.startswith('ssl_key_'): |
1057 | 584 | cns.append(k.lstrip('ssl_key_')) | 593 | cns.append(k.lstrip('ssl_key_')) |
1059 | 585 | return list(set(cns)) | 594 | |
1060 | 595 | return sorted(list(set(cns))) | ||
1061 | 596 | |||
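The hunk above now returns `sorted(list(set(cns)))` so that rendered templates come out in a stable order across hook runs. A standalone sketch of the helper (hypothetical; it uses prefix slicing where the original uses `lstrip('ssl_key_')`, since `str.lstrip` strips a character set rather than a literal prefix):

```python
def canonical_names(relation_keys):
    """Collect canonical names advertised via ssl_key_* relation keys."""
    prefix = 'ssl_key_'
    cns = [k[len(prefix):] for k in relation_keys if k.startswith(prefix)]
    # sorted(set(...)) de-duplicates and gives deterministic ordering
    return sorted(set(cns))
```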
1062 | 597 | def get_network_addresses(self): | ||
1063 | 598 | """For each network configured, return corresponding address and vip | ||
1064 | 599 | (if available). | ||
1065 | 600 | |||
1066 | 601 | Returns a list of tuples of the form: | ||
1067 | 602 | |||
1068 | 603 | [(address_in_net_a, vip_in_net_a), | ||
1069 | 604 | (address_in_net_b, vip_in_net_b), | ||
1070 | 605 | ...] | ||
1071 | 606 | |||
1072 | 607 | or, if no vip(s) available: | ||
1073 | 608 | |||
1074 | 609 | [(address_in_net_a, address_in_net_a), | ||
1075 | 610 | (address_in_net_b, address_in_net_b), | ||
1076 | 611 | ...] | ||
1077 | 612 | """ | ||
1078 | 613 | addresses = [] | ||
1079 | 614 | if config('vip'): | ||
1080 | 615 | vips = config('vip').split() | ||
1081 | 616 | else: | ||
1082 | 617 | vips = [] | ||
1083 | 618 | |||
1084 | 619 | for net_type in ['os-internal-network', 'os-admin-network', | ||
1085 | 620 | 'os-public-network']: | ||
1086 | 621 | addr = get_address_in_network(config(net_type), | ||
1087 | 622 | unit_get('private-address')) | ||
1088 | 623 | if len(vips) > 1 and is_clustered(): | ||
1089 | 624 | if not config(net_type): | ||
1090 | 625 | log("Multiple networks configured but net_type " | ||
1091 | 626 | "is None (%s)." % net_type, level=WARNING) | ||
1092 | 627 | continue | ||
1093 | 628 | |||
1094 | 629 | for vip in vips: | ||
1095 | 630 | if is_address_in_network(config(net_type), vip): | ||
1096 | 631 | addresses.append((addr, vip)) | ||
1097 | 632 | break | ||
1098 | 633 | |||
1099 | 634 | elif is_clustered() and config('vip'): | ||
1100 | 635 | addresses.append((addr, config('vip'))) | ||
1101 | 636 | else: | ||
1102 | 637 | addresses.append((addr, addr)) | ||
1103 | 638 | |||
1104 | 639 | return sorted(addresses) | ||
1105 | 586 | 640 | ||
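The new `get_network_addresses()` method can be sketched standalone; here `config()`, `is_clustered()` and `get_address_in_network()` are replaced by plain arguments (an assumption to make it runnable outside a charm), with `ipaddress` standing in for `is_address_in_network()`:

```python
import ipaddress

def get_network_addresses(net_configs, unit_addr, vips, clustered):
    """Return sorted (address, endpoint) pairs, one per configured network.

    net_configs maps a net type to a CIDR string (or None); vips is the
    list of configured virtual IPs (possibly empty).
    """
    addresses = []
    for net_type in ('os-internal-network', 'os-admin-network',
                     'os-public-network'):
        cidr = net_configs.get(net_type)
        addr = unit_addr  # stand-in for get_address_in_network(cidr, ...)
        if len(vips) > 1 and clustered:
            if not cidr:
                # multiple vips configured but no net split for this type
                continue
            for vip in vips:
                if ipaddress.ip_address(vip) in ipaddress.ip_network(cidr):
                    addresses.append((addr, vip))
                    break
        elif clustered and vips:
            addresses.append((addr, vips[0]))
        else:
            addresses.append((addr, addr))
    return sorted(addresses)
```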
1106 | 587 | def __call__(self): | 641 | def __call__(self): |
1108 | 588 | if isinstance(self.external_ports, basestring): | 642 | if isinstance(self.external_ports, six.string_types): |
1109 | 589 | self.external_ports = [self.external_ports] | 643 | self.external_ports = [self.external_ports] |
1111 | 590 | if (not self.external_ports or not https()): | 644 | |
1112 | 645 | if not self.external_ports or not https(): | ||
1113 | 591 | return {} | 646 | return {} |
1114 | 592 | 647 | ||
1115 | 593 | self.configure_ca() | 648 | self.configure_ca() |
1116 | 594 | self.enable_modules() | 649 | self.enable_modules() |
1117 | 595 | 650 | ||
1123 | 596 | ctxt = { | 651 | ctxt = {'namespace': self.service_namespace, |
1124 | 597 | 'namespace': self.service_namespace, | 652 | 'endpoints': [], |
1125 | 598 | 'endpoints': [], | 653 | 'ext_ports': []} |
1121 | 599 | 'ext_ports': [] | ||
1122 | 600 | } | ||
1126 | 601 | 654 | ||
1127 | 602 | for cn in self.canonical_names(): | 655 | for cn in self.canonical_names(): |
1128 | 603 | self.configure_cert(cn) | 656 | self.configure_cert(cn) |
1129 | 604 | 657 | ||
1152 | 605 | addresses = [] | 658 | addresses = self.get_network_addresses() |
1153 | 606 | vips = [] | 659 | for address, endpoint in sorted(set(addresses)): |
1132 | 607 | if config('vip'): | ||
1133 | 608 | vips = config('vip').split() | ||
1134 | 609 | |||
1135 | 610 | for network_type in ['os-internal-network', | ||
1136 | 611 | 'os-admin-network', | ||
1137 | 612 | 'os-public-network']: | ||
1138 | 613 | address = get_address_in_network(config(network_type), | ||
1139 | 614 | unit_get('private-address')) | ||
1140 | 615 | if len(vips) > 0 and is_clustered(): | ||
1141 | 616 | for vip in vips: | ||
1142 | 617 | if is_address_in_network(config(network_type), | ||
1143 | 618 | vip): | ||
1144 | 619 | addresses.append((address, vip)) | ||
1145 | 620 | break | ||
1146 | 621 | elif is_clustered(): | ||
1147 | 622 | addresses.append((address, config('vip'))) | ||
1148 | 623 | else: | ||
1149 | 624 | addresses.append((address, address)) | ||
1150 | 625 | |||
1151 | 626 | for address, endpoint in set(addresses): | ||
1154 | 627 | for api_port in self.external_ports: | 660 | for api_port in self.external_ports: |
1155 | 628 | ext_port = determine_apache_port(api_port) | 661 | ext_port = determine_apache_port(api_port) |
1156 | 629 | int_port = determine_api_port(api_port) | 662 | int_port = determine_api_port(api_port) |
1157 | 630 | portmap = (address, endpoint, int(ext_port), int(int_port)) | 663 | portmap = (address, endpoint, int(ext_port), int(int_port)) |
1158 | 631 | ctxt['endpoints'].append(portmap) | 664 | ctxt['endpoints'].append(portmap) |
1159 | 632 | ctxt['ext_ports'].append(int(ext_port)) | 665 | ctxt['ext_ports'].append(int(ext_port)) |
1161 | 633 | ctxt['ext_ports'] = list(set(ctxt['ext_ports'])) | 666 | |
1162 | 667 | ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports']))) | ||
1163 | 634 | return ctxt | 668 | return ctxt |
1164 | 635 | 669 | ||
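The endpoint map that the refactored `__call__` builds from those `(address, endpoint)` pairs can be sketched on its own. `determine_apache_port()`/`determine_api_port()` are stubbed here with fixed offsets; that is an assumption for illustration, as the real helpers adjust ports based on https and clustering state:

```python
def build_endpoints(addresses, external_ports,
                    apache_offset=10, api_offset=20):
    """Build the {endpoints, ext_ports} context from (address, vip) pairs."""
    ctxt = {'endpoints': [], 'ext_ports': []}
    for address, endpoint in sorted(set(addresses)):
        for api_port in external_ports:
            ext_port = api_port - apache_offset  # stand-in: determine_apache_port
            int_port = api_port - api_offset     # stand-in: determine_api_port
            ctxt['endpoints'].append((address, endpoint, ext_port, int_port))
            ctxt['ext_ports'].append(ext_port)
    # de-duplicate and sort, mirroring the sorted(list(set(...))) in the diff
    ctxt['ext_ports'] = sorted(set(ctxt['ext_ports']))
    return ctxt
```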
1165 | 636 | 670 | ||
1166 | @@ -647,21 +681,23 @@ | |||
1167 | 647 | 681 | ||
1168 | 648 | @property | 682 | @property |
1169 | 649 | def packages(self): | 683 | def packages(self): |
1172 | 650 | return neutron_plugin_attribute( | 684 | return neutron_plugin_attribute(self.plugin, 'packages', |
1173 | 651 | self.plugin, 'packages', self.network_manager) | 685 | self.network_manager) |
1174 | 652 | 686 | ||
1175 | 653 | @property | 687 | @property |
1176 | 654 | def neutron_security_groups(self): | 688 | def neutron_security_groups(self): |
1177 | 655 | return None | 689 | return None |
1178 | 656 | 690 | ||
1179 | 657 | def _ensure_packages(self): | 691 | def _ensure_packages(self): |
1181 | 658 | [ensure_packages(pkgs) for pkgs in self.packages] | 692 | for pkgs in self.packages: |
1182 | 693 | ensure_packages(pkgs) | ||
1183 | 659 | 694 | ||
1184 | 660 | def _save_flag_file(self): | 695 | def _save_flag_file(self): |
1185 | 661 | if self.network_manager == 'quantum': | 696 | if self.network_manager == 'quantum': |
1186 | 662 | _file = '/etc/nova/quantum_plugin.conf' | 697 | _file = '/etc/nova/quantum_plugin.conf' |
1187 | 663 | else: | 698 | else: |
1188 | 664 | _file = '/etc/nova/neutron_plugin.conf' | 699 | _file = '/etc/nova/neutron_plugin.conf' |
1189 | 700 | |||
1190 | 665 | with open(_file, 'wb') as out: | 701 | with open(_file, 'wb') as out: |
1191 | 666 | out.write(self.plugin + '\n') | 702 | out.write(self.plugin + '\n') |
1192 | 667 | 703 | ||
1193 | @@ -670,13 +706,11 @@ | |||
1194 | 670 | self.network_manager) | 706 | self.network_manager) |
1195 | 671 | config = neutron_plugin_attribute(self.plugin, 'config', | 707 | config = neutron_plugin_attribute(self.plugin, 'config', |
1196 | 672 | self.network_manager) | 708 | self.network_manager) |
1204 | 673 | ovs_ctxt = { | 709 | ovs_ctxt = {'core_plugin': driver, |
1205 | 674 | 'core_plugin': driver, | 710 | 'neutron_plugin': 'ovs', |
1206 | 675 | 'neutron_plugin': 'ovs', | 711 | 'neutron_security_groups': self.neutron_security_groups, |
1207 | 676 | 'neutron_security_groups': self.neutron_security_groups, | 712 | 'local_ip': unit_private_ip(), |
1208 | 677 | 'local_ip': unit_private_ip(), | 713 | 'config': config} |
1202 | 678 | 'config': config | ||
1203 | 679 | } | ||
1209 | 680 | 714 | ||
1210 | 681 | return ovs_ctxt | 715 | return ovs_ctxt |
1211 | 682 | 716 | ||
1212 | @@ -685,13 +719,11 @@ | |||
1213 | 685 | self.network_manager) | 719 | self.network_manager) |
1214 | 686 | config = neutron_plugin_attribute(self.plugin, 'config', | 720 | config = neutron_plugin_attribute(self.plugin, 'config', |
1215 | 687 | self.network_manager) | 721 | self.network_manager) |
1223 | 688 | nvp_ctxt = { | 722 | nvp_ctxt = {'core_plugin': driver, |
1224 | 689 | 'core_plugin': driver, | 723 | 'neutron_plugin': 'nvp', |
1225 | 690 | 'neutron_plugin': 'nvp', | 724 | 'neutron_security_groups': self.neutron_security_groups, |
1226 | 691 | 'neutron_security_groups': self.neutron_security_groups, | 725 | 'local_ip': unit_private_ip(), |
1227 | 692 | 'local_ip': unit_private_ip(), | 726 | 'config': config} |
1221 | 693 | 'config': config | ||
1222 | 694 | } | ||
1228 | 695 | 727 | ||
1229 | 696 | return nvp_ctxt | 728 | return nvp_ctxt |
1230 | 697 | 729 | ||
1231 | @@ -700,35 +732,50 @@ | |||
1232 | 700 | self.network_manager) | 732 | self.network_manager) |
1233 | 701 | n1kv_config = neutron_plugin_attribute(self.plugin, 'config', | 733 | n1kv_config = neutron_plugin_attribute(self.plugin, 'config', |
1234 | 702 | self.network_manager) | 734 | self.network_manager) |
1247 | 703 | n1kv_ctxt = { | 735 | n1kv_user_config_flags = config('n1kv-config-flags') |
1248 | 704 | 'core_plugin': driver, | 736 | restrict_policy_profiles = config('n1kv-restrict-policy-profiles') |
1249 | 705 | 'neutron_plugin': 'n1kv', | 737 | n1kv_ctxt = {'core_plugin': driver, |
1250 | 706 | 'neutron_security_groups': self.neutron_security_groups, | 738 | 'neutron_plugin': 'n1kv', |
1251 | 707 | 'local_ip': unit_private_ip(), | 739 | 'neutron_security_groups': self.neutron_security_groups, |
1252 | 708 | 'config': n1kv_config, | 740 | 'local_ip': unit_private_ip(), |
1253 | 709 | 'vsm_ip': config('n1kv-vsm-ip'), | 741 | 'config': n1kv_config, |
1254 | 710 | 'vsm_username': config('n1kv-vsm-username'), | 742 | 'vsm_ip': config('n1kv-vsm-ip'), |
1255 | 711 | 'vsm_password': config('n1kv-vsm-password'), | 743 | 'vsm_username': config('n1kv-vsm-username'), |
1256 | 712 | 'restrict_policy_profiles': config( | 744 | 'vsm_password': config('n1kv-vsm-password'), |
1257 | 713 | 'n1kv_restrict_policy_profiles'), | 745 | 'restrict_policy_profiles': restrict_policy_profiles} |
1258 | 714 | } | 746 | |
1259 | 747 | if n1kv_user_config_flags: | ||
1260 | 748 | flags = config_flags_parser(n1kv_user_config_flags) | ||
1261 | 749 | n1kv_ctxt['user_config_flags'] = flags | ||
1262 | 715 | 750 | ||
1263 | 716 | return n1kv_ctxt | 751 | return n1kv_ctxt |
1264 | 717 | 752 | ||
1265 | 753 | def calico_ctxt(self): | ||
1266 | 754 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
1267 | 755 | self.network_manager) | ||
1268 | 756 | config = neutron_plugin_attribute(self.plugin, 'config', | ||
1269 | 757 | self.network_manager) | ||
1270 | 758 | calico_ctxt = {'core_plugin': driver, | ||
1271 | 759 | 'neutron_plugin': 'Calico', | ||
1272 | 760 | 'neutron_security_groups': self.neutron_security_groups, | ||
1273 | 761 | 'local_ip': unit_private_ip(), | ||
1274 | 762 | 'config': config} | ||
1275 | 763 | |||
1276 | 764 | return calico_ctxt | ||
1277 | 765 | |||
1278 | 718 | def neutron_ctxt(self): | 766 | def neutron_ctxt(self): |
1279 | 719 | if https(): | 767 | if https(): |
1280 | 720 | proto = 'https' | 768 | proto = 'https' |
1281 | 721 | else: | 769 | else: |
1282 | 722 | proto = 'http' | 770 | proto = 'http' |
1283 | 771 | |||
1284 | 723 | if is_clustered(): | 772 | if is_clustered(): |
1285 | 724 | host = config('vip') | 773 | host = config('vip') |
1286 | 725 | else: | 774 | else: |
1287 | 726 | host = unit_get('private-address') | 775 | host = unit_get('private-address') |
1293 | 727 | url = '%s://%s:%s' % (proto, host, '9696') | 776 | |
1294 | 728 | ctxt = { | 777 | ctxt = {'network_manager': self.network_manager, |
1295 | 729 | 'network_manager': self.network_manager, | 778 | 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} |
1291 | 730 | 'neutron_url': url, | ||
1292 | 731 | } | ||
1296 | 732 | return ctxt | 779 | return ctxt |
1297 | 733 | 780 | ||
1298 | 734 | def __call__(self): | 781 | def __call__(self): |
1299 | @@ -748,6 +795,8 @@ | |||
1300 | 748 | ctxt.update(self.nvp_ctxt()) | 795 | ctxt.update(self.nvp_ctxt()) |
1301 | 749 | elif self.plugin == 'n1kv': | 796 | elif self.plugin == 'n1kv': |
1302 | 750 | ctxt.update(self.n1kv_ctxt()) | 797 | ctxt.update(self.n1kv_ctxt()) |
1303 | 798 | elif self.plugin == 'Calico': | ||
1304 | 799 | ctxt.update(self.calico_ctxt()) | ||
1305 | 751 | 800 | ||
1306 | 752 | alchemy_flags = config('neutron-alchemy-flags') | 801 | alchemy_flags = config('neutron-alchemy-flags') |
1307 | 753 | if alchemy_flags: | 802 | if alchemy_flags: |
1308 | @@ -759,23 +808,40 @@ | |||
1309 | 759 | 808 | ||
1310 | 760 | 809 | ||
1311 | 761 | class OSConfigFlagContext(OSContextGenerator): | 810 | class OSConfigFlagContext(OSContextGenerator): |
1316 | 762 | 811 | """Provides support for user-defined config flags. | |
1317 | 763 | """ | 812 | |
1318 | 764 | Responsible for adding user-defined config-flags in charm config to a | 813 | Users can define a comma-separated list of key=value pairs |
1319 | 765 | template context. | 814 | in the charm configuration and apply them at any point in |
1320 | 815 | any file by using a template flag. | ||
1321 | 816 | |||
1322 | 817 | Sometimes users might want config flags inserted within a | ||
1323 | 818 | specific section so this class allows users to specify the | ||
1324 | 819 | template flag name, allowing for multiple template flags | ||
1325 | 820 | (sections) within the same context. | ||
1326 | 766 | 821 | ||
1327 | 767 | NOTE: the value of config-flags may be a comma-separated list of | 822 | NOTE: the value of config-flags may be a comma-separated list of |
1328 | 768 | key=value pairs and some OpenStack config files support | 823 | key=value pairs and some OpenStack config files support |
1329 | 769 | comma-separated lists as values. | 824 | comma-separated lists as values. |
1330 | 770 | """ | 825 | """ |
1331 | 771 | 826 | ||
1332 | 827 | def __init__(self, charm_flag='config-flags', | ||
1333 | 828 | template_flag='user_config_flags'): | ||
1334 | 829 | """ | ||
1335 | 830 | :param charm_flag: config flags in charm configuration. | ||
1336 | 831 | :param template_flag: insert point for user-defined flags in template | ||
1337 | 832 | file. | ||
1338 | 833 | """ | ||
1339 | 834 | super(OSConfigFlagContext, self).__init__() | ||
1340 | 835 | self._charm_flag = charm_flag | ||
1341 | 836 | self._template_flag = template_flag | ||
1342 | 837 | |||
1343 | 772 | def __call__(self): | 838 | def __call__(self): |
1345 | 773 | config_flags = config('config-flags') | 839 | config_flags = config(self._charm_flag) |
1346 | 774 | if not config_flags: | 840 | if not config_flags: |
1347 | 775 | return {} | 841 | return {} |
1348 | 776 | 842 | ||
1351 | 777 | flags = config_flags_parser(config_flags) | 843 | return {self._template_flag: |
1352 | 778 | return {'user_config_flags': flags} | 844 | config_flags_parser(config_flags)} |
1353 | 779 | 845 | ||
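The reworked `OSConfigFlagContext` now takes both the charm flag name and the template insertion point as parameters. A minimal standalone sketch, with `config()` replaced by a plain dict and a deliberately naive `config_flags_parser()` (an assumption; the real parser also copes with quoted, comma-containing values):

```python
def config_flags_parser(flags):
    # naive key=value,key=value parser -- illustration only
    return dict(kv.split('=', 1) for kv in flags.split(','))

class OSConfigFlagContext(object):
    def __init__(self, charm_flag='config-flags',
                 template_flag='user_config_flags', charm_config=None):
        # charm_config stands in for the hookenv config() accessor
        self._charm_flag = charm_flag
        self._template_flag = template_flag
        self._config = charm_config or {}

    def __call__(self):
        config_flags = self._config.get(self._charm_flag)
        if not config_flags:
            return {}
        return {self._template_flag: config_flags_parser(config_flags)}
```

This is also what the n1kv hunk above relies on: it feeds `n1kv-config-flags` through the same parser under a second template flag.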
1354 | 780 | 846 | ||
1355 | 781 | class SubordinateConfigContext(OSContextGenerator): | 847 | class SubordinateConfigContext(OSContextGenerator): |
1356 | @@ -819,7 +885,6 @@ | |||
1357 | 819 | }, | 885 | }, |
1358 | 820 | } | 886 | } |
1359 | 821 | } | 887 | } |
1360 | 822 | |||
1361 | 823 | """ | 888 | """ |
1362 | 824 | 889 | ||
1363 | 825 | def __init__(self, service, config_file, interface): | 890 | def __init__(self, service, config_file, interface): |
1364 | @@ -849,26 +914,28 @@ | |||
1365 | 849 | 914 | ||
1366 | 850 | if self.service not in sub_config: | 915 | if self.service not in sub_config: |
1367 | 851 | log('Found subordinate_config on %s but it contained ' | 916 | log('Found subordinate_config on %s but it contained ' |
1369 | 852 | 'nothing for %s service' % (rid, self.service)) | 917 | 'nothing for %s service' % (rid, self.service), |
1370 | 918 | level=INFO) | ||
1371 | 853 | continue | 919 | continue |
1372 | 854 | 920 | ||
1373 | 855 | sub_config = sub_config[self.service] | 921 | sub_config = sub_config[self.service] |
1374 | 856 | if self.config_file not in sub_config: | 922 | if self.config_file not in sub_config: |
1375 | 857 | log('Found subordinate_config on %s but it contained ' | 923 | log('Found subordinate_config on %s but it contained ' |
1377 | 858 | 'nothing for %s' % (rid, self.config_file)) | 924 | 'nothing for %s' % (rid, self.config_file), |
1378 | 925 | level=INFO) | ||
1379 | 859 | continue | 926 | continue |
1380 | 860 | 927 | ||
1381 | 861 | sub_config = sub_config[self.config_file] | 928 | sub_config = sub_config[self.config_file] |
1383 | 862 | for k, v in sub_config.iteritems(): | 929 | for k, v in six.iteritems(sub_config): |
1384 | 863 | if k == 'sections': | 930 | if k == 'sections': |
1387 | 864 | for section, config_dict in v.iteritems(): | 931 | for section, config_dict in six.iteritems(v): |
1388 | 865 | log("adding section '%s'" % (section)) | 932 | log("adding section '%s'" % (section), |
1389 | 933 | level=DEBUG) | ||
1390 | 866 | ctxt[k][section] = config_dict | 934 | ctxt[k][section] = config_dict |
1391 | 867 | else: | 935 | else: |
1392 | 868 | ctxt[k] = v | 936 | ctxt[k] = v |
1393 | 869 | 937 | ||
1396 | 870 | log("%d section(s) found" % (len(ctxt['sections'])), level=INFO) | 938 | log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG) |
1395 | 871 | |||
1397 | 872 | return ctxt | 939 | return ctxt |
1398 | 873 | 940 | ||
1399 | 874 | 941 | ||
1400 | @@ -880,15 +947,14 @@ | |||
1401 | 880 | False if config('debug') is None else config('debug') | 947 | False if config('debug') is None else config('debug') |
1402 | 881 | ctxt['verbose'] = \ | 948 | ctxt['verbose'] = \ |
1403 | 882 | False if config('verbose') is None else config('verbose') | 949 | False if config('verbose') is None else config('verbose') |
1404 | 950 | |||
1405 | 883 | return ctxt | 951 | return ctxt |
1406 | 884 | 952 | ||
1407 | 885 | 953 | ||
1408 | 886 | class SyslogContext(OSContextGenerator): | 954 | class SyslogContext(OSContextGenerator): |
1409 | 887 | 955 | ||
1410 | 888 | def __call__(self): | 956 | def __call__(self): |
1414 | 889 | ctxt = { | 957 | ctxt = {'use_syslog': config('use-syslog')} |
1412 | 890 | 'use_syslog': config('use-syslog') | ||
1413 | 891 | } | ||
1415 | 892 | return ctxt | 958 | return ctxt |
1416 | 893 | 959 | ||
1417 | 894 | 960 | ||
1418 | @@ -896,13 +962,9 @@ | |||
1419 | 896 | 962 | ||
1420 | 897 | def __call__(self): | 963 | def __call__(self): |
1421 | 898 | if config('prefer-ipv6'): | 964 | if config('prefer-ipv6'): |
1425 | 899 | return { | 965 | return {'bind_host': '::'} |
1423 | 900 | 'bind_host': '::' | ||
1424 | 901 | } | ||
1426 | 902 | else: | 966 | else: |
1430 | 903 | return { | 967 | return {'bind_host': '0.0.0.0'} |
1428 | 904 | 'bind_host': '0.0.0.0' | ||
1429 | 905 | } | ||
1431 | 906 | 968 | ||
1432 | 907 | 969 | ||
1433 | 908 | class WorkerConfigContext(OSContextGenerator): | 970 | class WorkerConfigContext(OSContextGenerator): |
1434 | @@ -914,11 +976,42 @@ | |||
1435 | 914 | except ImportError: | 976 | except ImportError: |
1436 | 915 | apt_install('python-psutil', fatal=True) | 977 | apt_install('python-psutil', fatal=True) |
1437 | 916 | from psutil import NUM_CPUS | 978 | from psutil import NUM_CPUS |
1438 | 979 | |||
1439 | 917 | return NUM_CPUS | 980 | return NUM_CPUS |
1440 | 918 | 981 | ||
1441 | 919 | def __call__(self): | 982 | def __call__(self): |
1446 | 920 | multiplier = config('worker-multiplier') or 1 | 983 | multiplier = config('worker-multiplier') or 0 |
1447 | 921 | ctxt = { | 984 | ctxt = {"workers": self.num_cpus * multiplier} |
1448 | 922 | "workers": self.num_cpus * multiplier | 985 | return ctxt |
1449 | 923 | } | 986 | |
1450 | 987 | |||
1451 | 988 | class ZeroMQContext(OSContextGenerator): | ||
1452 | 989 | interfaces = ['zeromq-configuration'] | ||
1453 | 990 | |||
1454 | 991 | def __call__(self): | ||
1455 | 992 | ctxt = {} | ||
1456 | 993 | if is_relation_made('zeromq-configuration', 'host'): | ||
1457 | 994 | for rid in relation_ids('zeromq-configuration'): | ||
1458 | 995 | for unit in related_units(rid): | ||
1459 | 996 | ctxt['zmq_nonce'] = relation_get('nonce', unit, rid) | ||
1460 | 997 | ctxt['zmq_host'] = relation_get('host', unit, rid) | ||
1461 | 998 | |||
1462 | 999 | return ctxt | ||
1463 | 1000 | |||
1464 | 1001 | |||
1465 | 1002 | class NotificationDriverContext(OSContextGenerator): | ||
1466 | 1003 | |||
1467 | 1004 | def __init__(self, zmq_relation='zeromq-configuration', | ||
1468 | 1005 | amqp_relation='amqp'): | ||
1469 | 1006 | """ | ||
1470 | 1007 | :param zmq_relation: Name of Zeromq relation to check | ||
1471 | 1008 | """ | ||
1472 | 1009 | self.zmq_relation = zmq_relation | ||
1473 | 1010 | self.amqp_relation = amqp_relation | ||
1474 | 1011 | |||
1475 | 1012 | def __call__(self): | ||
1476 | 1013 | ctxt = {'notifications': 'False'} | ||
1477 | 1014 | if is_relation_made(self.amqp_relation): | ||
1478 | 1015 | ctxt['notifications'] = "True" | ||
1479 | 1016 | |||
1480 | 924 | return ctxt | 1017 | return ctxt |
1481 | 925 | 1018 | ||
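The new `NotificationDriverContext` only flips notifications on when an amqp relation exists. A standalone sketch, with `is_relation_made()` stubbed by a set of existing relation names (an assumption so it runs outside a hook environment):

```python
class NotificationDriverContext(object):
    def __init__(self, relations, amqp_relation='amqp'):
        # relations stands in for is_relation_made() lookups
        self.relations = set(relations)
        self.amqp_relation = amqp_relation

    def __call__(self):
        ctxt = {'notifications': 'False'}
        if self.amqp_relation in self.relations:
            ctxt['notifications'] = 'True'
        return ctxt
```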
1482 | === modified file 'hooks/charmhelpers/contrib/openstack/ip.py' | |||
1483 | --- hooks/charmhelpers/contrib/openstack/ip.py 2014-10-07 21:03:47 +0000 | |||
1484 | +++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-05 07:18:23 +0000 | |||
1485 | @@ -2,21 +2,19 @@ | |||
1486 | 2 | config, | 2 | config, |
1487 | 3 | unit_get, | 3 | unit_get, |
1488 | 4 | ) | 4 | ) |
1489 | 5 | |||
1490 | 6 | from charmhelpers.contrib.network.ip import ( | 5 | from charmhelpers.contrib.network.ip import ( |
1491 | 7 | get_address_in_network, | 6 | get_address_in_network, |
1492 | 8 | is_address_in_network, | 7 | is_address_in_network, |
1493 | 9 | is_ipv6, | 8 | is_ipv6, |
1494 | 10 | get_ipv6_addr, | 9 | get_ipv6_addr, |
1495 | 11 | ) | 10 | ) |
1496 | 12 | |||
1497 | 13 | from charmhelpers.contrib.hahelpers.cluster import is_clustered | 11 | from charmhelpers.contrib.hahelpers.cluster import is_clustered |
1498 | 14 | 12 | ||
1499 | 15 | PUBLIC = 'public' | 13 | PUBLIC = 'public' |
1500 | 16 | INTERNAL = 'int' | 14 | INTERNAL = 'int' |
1501 | 17 | ADMIN = 'admin' | 15 | ADMIN = 'admin' |
1502 | 18 | 16 | ||
1504 | 19 | _address_map = { | 17 | ADDRESS_MAP = { |
1505 | 20 | PUBLIC: { | 18 | PUBLIC: { |
1506 | 21 | 'config': 'os-public-network', | 19 | 'config': 'os-public-network', |
1507 | 22 | 'fallback': 'public-address' | 20 | 'fallback': 'public-address' |
1508 | @@ -33,16 +31,14 @@ | |||
1509 | 33 | 31 | ||
1510 | 34 | 32 | ||
1511 | 35 | def canonical_url(configs, endpoint_type=PUBLIC): | 33 | def canonical_url(configs, endpoint_type=PUBLIC): |
1514 | 36 | ''' | 34 | """Returns the correct HTTP URL to this host given the state of HTTPS |
1513 | 37 | Returns the correct HTTP URL to this host given the state of HTTPS | ||
1515 | 38 | configuration, hacluster and charm configuration. | 35 | configuration, hacluster and charm configuration. |
1516 | 39 | 36 | ||
1523 | 40 | :configs OSTemplateRenderer: A config tempating object to inspect for | 37 | :param configs: OSTemplateRenderer config templating object to inspect |
1524 | 41 | a complete https context. | 38 | for a complete https context. |
1525 | 42 | :endpoint_type str: The endpoint type to resolve. | 39 | :param endpoint_type: str endpoint type to resolve. |
1526 | 43 | 40 | :returns: str base URL for services on the current service unit. |
1527 | 44 | :returns str: Base URL for services on the current service unit. | 41 | """ |
1522 | 45 | ''' | ||
1528 | 46 | scheme = 'http' | 42 | scheme = 'http' |
1529 | 47 | if 'https' in configs.complete_contexts(): | 43 | if 'https' in configs.complete_contexts(): |
1530 | 48 | scheme = 'https' | 44 | scheme = 'https' |
1531 | @@ -53,27 +49,45 @@ | |||
1532 | 53 | 49 | ||
1533 | 54 | 50 | ||
1534 | 55 | def resolve_address(endpoint_type=PUBLIC): | 51 | def resolve_address(endpoint_type=PUBLIC): |
1535 | 52 | """Return unit address depending on net config. | ||
1536 | 53 | |||
1537 | 54 | If unit is clustered with vip(s) and has net splits defined, return vip on | ||
1538 | 55 | correct network. If clustered with no nets defined, return primary vip. | ||
1539 | 56 | |||
1540 | 57 | If not clustered, return unit address ensuring address is on configured net | ||
1541 | 58 | split if one is configured. | ||
1542 | 59 | |||
1543 | 60 | :param endpoint_type: Network endpoint type |
1544 | 61 | """ | ||
1545 | 56 | resolved_address = None | 62 | resolved_address = None |
1550 | 57 | if is_clustered(): | 63 | vips = config('vip') |
1551 | 58 | if config(_address_map[endpoint_type]['config']) is None: | 64 | if vips: |
1552 | 59 | # Assume vip is simple and pass back directly | 65 | vips = vips.split() |
1553 | 60 | resolved_address = config('vip') | 66 | |
1554 | 67 | net_type = ADDRESS_MAP[endpoint_type]['config'] | ||
1555 | 68 | net_addr = config(net_type) | ||
1556 | 69 | net_fallback = ADDRESS_MAP[endpoint_type]['fallback'] | ||
1557 | 70 | clustered = is_clustered() | ||
1558 | 71 | if clustered: | ||
1559 | 72 | if not net_addr: | ||
1560 | 73 | # If no net-splits defined, we expect a single vip | ||
1561 | 74 | resolved_address = vips[0] | ||
1562 | 61 | else: | 75 | else: |
1567 | 62 | for vip in config('vip').split(): | 76 | for vip in vips: |
1568 | 63 | if is_address_in_network( | 77 | if is_address_in_network(net_addr, vip): |
1565 | 64 | config(_address_map[endpoint_type]['config']), | ||
1566 | 65 | vip): | ||
1569 | 66 | resolved_address = vip | 78 | resolved_address = vip |
1570 | 79 | break | ||
1571 | 67 | else: | 80 | else: |
1572 | 68 | if config('prefer-ipv6'): | 81 | if config('prefer-ipv6'): |
1574 | 69 | fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0] | 82 | fallback_addr = get_ipv6_addr(exc_list=vips)[0] |
1575 | 70 | else: | 83 | else: |
1579 | 71 | fallback_addr = unit_get(_address_map[endpoint_type]['fallback']) | 84 | fallback_addr = unit_get(net_fallback) |
1580 | 72 | resolved_address = get_address_in_network( | 85 | |
1581 | 73 | config(_address_map[endpoint_type]['config']), fallback_addr) | 86 | resolved_address = get_address_in_network(net_addr, fallback_addr) |
1582 | 74 | 87 | ||
1583 | 75 | if resolved_address is None: | 88 | if resolved_address is None: |
1588 | 76 | raise ValueError('Unable to resolve a suitable IP address' | 89 | raise ValueError("Unable to resolve a suitable IP address based on " |
1589 | 77 | ' based on charm state and configuration') | 90 | "charm state and configuration. (net_type=%s, " |
1590 | 78 | else: | 91 | "clustered=%s)" % (net_type, clustered)) |
1591 | 79 | return resolved_address | 92 | |
1592 | 93 | return resolved_address | ||
1593 | 80 | 94 | ||
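The rewritten `resolve_address()` flow can be sketched standalone; `config()`, `is_clustered()` and the network helpers are replaced by arguments (an assumption for illustration), with `ipaddress` standing in for `is_address_in_network()`:

```python
import ipaddress

def resolve_address(vips, clustered, net_addr, fallback_addr):
    """Pick a vip on the configured net split when clustered, else fall back."""
    resolved = None
    if clustered:
        if not net_addr:
            resolved = vips[0]  # no net splits defined: expect a single vip
        else:
            for vip in vips:
                if ipaddress.ip_address(vip) in ipaddress.ip_network(net_addr):
                    resolved = vip
                    break
    else:
        resolved = fallback_addr  # stand-in for get_address_in_network()
    if resolved is None:
        raise ValueError("Unable to resolve a suitable IP address")
    return resolved
```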
1594 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
1595 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-24 13:40:39 +0000 | |||
1596 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-05 07:18:23 +0000 | |||
1597 | @@ -14,7 +14,7 @@ | |||
1598 | 14 | def headers_package(): | 14 | def headers_package(): |
1599 | 15 | """Ensures correct linux-headers for running kernel are installed, | 15 | """Ensures correct linux-headers for running kernel are installed, |
1600 | 16 | for building DKMS package""" | 16 | for building DKMS package""" |
1602 | 17 | kver = check_output(['uname', '-r']).strip() | 17 | kver = check_output(['uname', '-r']).decode('UTF-8').strip() |
1603 | 18 | return 'linux-headers-%s' % kver | 18 | return 'linux-headers-%s' % kver |
1604 | 19 | 19 | ||
1605 | 20 | QUANTUM_CONF_DIR = '/etc/quantum' | 20 | QUANTUM_CONF_DIR = '/etc/quantum' |
1606 | @@ -22,7 +22,7 @@ | |||
1607 | 22 | 22 | ||
1608 | 23 | def kernel_version(): | 23 | def kernel_version(): |
1609 | 24 | """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ | 24 | """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ |
1611 | 25 | kver = check_output(['uname', '-r']).strip() | 25 | kver = check_output(['uname', '-r']).decode('UTF-8').strip() |
1612 | 26 | kver = kver.split('.') | 26 | kver = kver.split('.') |
1613 | 27 | return (int(kver[0]), int(kver[1])) | 27 | return (int(kver[0]), int(kver[1])) |
1614 | 28 | 28 | ||
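On Python 3, `check_output()` returns bytes, which is why the two hunks above add `.decode('UTF-8')` before `.strip()`/`.split()`. A minimal illustration of the fixed `kernel_version()` parsing, using a canned value instead of calling `uname -r`:

```python
def kernel_version(raw=b'3.13.0-170-generic\n'):
    """Parse a `uname -r` byte string into a (major, minor) tuple."""
    kver = raw.decode('UTF-8').strip()   # bytes -> str before string ops
    parts = kver.split('.')
    return (int(parts[0]), int(parts[1]))
```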
1615 | @@ -138,10 +138,25 @@ | |||
1616 | 138 | relation_prefix='neutron', | 138 | relation_prefix='neutron', |
1617 | 139 | ssl_dir=NEUTRON_CONF_DIR)], | 139 | ssl_dir=NEUTRON_CONF_DIR)], |
1618 | 140 | 'services': [], | 140 | 'services': [], |
1620 | 141 | 'packages': [['neutron-plugin-cisco']], | 141 | 'packages': [[headers_package()] + determine_dkms_package(), |
1621 | 142 | ['neutron-plugin-cisco']], | ||
1622 | 142 | 'server_packages': ['neutron-server', | 143 | 'server_packages': ['neutron-server', |
1623 | 143 | 'neutron-plugin-cisco'], | 144 | 'neutron-plugin-cisco'], |
1624 | 144 | 'server_services': ['neutron-server'] | 145 | 'server_services': ['neutron-server'] |
1625 | 146 | }, | ||
1626 | 147 | 'Calico': { | ||
1627 | 148 | 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini', | ||
1628 | 149 | 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin', | ||
1629 | 150 | 'contexts': [ | ||
1630 | 151 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1631 | 152 | database=config('neutron-database'), | ||
1632 | 153 | relation_prefix='neutron', | ||
1633 | 154 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1634 | 155 | 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'], | ||
1635 | 156 | 'packages': [[headers_package()] + determine_dkms_package(), | ||
1636 | 157 | ['calico-compute', 'bird', 'neutron-dhcp-agent']], | ||
1637 | 158 | 'server_packages': ['neutron-server', 'calico-control'], | ||
1638 | 159 | 'server_services': ['neutron-server'] | ||
1639 | 145 | } | 160 | } |
1640 | 146 | } | 161 | } |
1641 | 147 | if release >= 'icehouse': | 162 | if release >= 'icehouse': |
1642 | @@ -162,7 +177,8 @@ | |||
1643 | 162 | elif manager == 'neutron': | 177 | elif manager == 'neutron': |
1644 | 163 | plugins = neutron_plugins() | 178 | plugins = neutron_plugins() |
1645 | 164 | else: | 179 | else: |
1647 | 165 | log('Error: Network manager does not support plugins.') | 180 | log("Network manager '%s' does not support plugins." % (manager), |
1648 | 181 | level=ERROR) | ||
1649 | 166 | raise Exception | 182 | raise Exception |
1650 | 167 | 183 | ||
1651 | 168 | try: | 184 | try: |
1652 | 169 | 185 | ||
1653 | === modified file 'hooks/charmhelpers/contrib/openstack/templating.py' | |||
1654 | --- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-29 07:46:01 +0000 | |||
1655 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-05 07:18:23 +0000 | |||
1656 | @@ -1,13 +1,13 @@ | |||
1657 | 1 | import os | 1 | import os |
1658 | 2 | 2 | ||
1659 | 3 | import six | ||
1660 | 4 | |||
1661 | 3 | from charmhelpers.fetch import apt_install | 5 | from charmhelpers.fetch import apt_install |
1662 | 4 | |||
1663 | 5 | from charmhelpers.core.hookenv import ( | 6 | from charmhelpers.core.hookenv import ( |
1664 | 6 | log, | 7 | log, |
1665 | 7 | ERROR, | 8 | ERROR, |
1666 | 8 | INFO | 9 | INFO |
1667 | 9 | ) | 10 | ) |
1668 | 10 | |||
1669 | 11 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES | 11 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES |
1670 | 12 | 12 | ||
1671 | 13 | try: | 13 | try: |
1672 | @@ -43,7 +43,7 @@ | |||
1673 | 43 | order by OpenStack release. | 43 | order by OpenStack release. |
1674 | 44 | """ | 44 | """ |
1675 | 45 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) | 45 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) |
1677 | 46 | for rel in OPENSTACK_CODENAMES.itervalues()] | 46 | for rel in six.itervalues(OPENSTACK_CODENAMES)] |
1678 | 47 | 47 | ||
1679 | 48 | if not os.path.isdir(templates_dir): | 48 | if not os.path.isdir(templates_dir): |
1680 | 49 | log('Templates directory not found @ %s.' % templates_dir, | 49 | log('Templates directory not found @ %s.' % templates_dir, |
1681 | @@ -258,7 +258,7 @@ | |||
1682 | 258 | """ | 258 | """ |
1683 | 259 | Write out all registered config files. | 259 | Write out all registered config files. |
1684 | 260 | """ | 260 | """ |
1686 | 261 | [self.write(k) for k in self.templates.iterkeys()] | 261 | [self.write(k) for k in six.iterkeys(self.templates)] |
1687 | 262 | 262 | ||
1688 | 263 | def set_release(self, openstack_release): | 263 | def set_release(self, openstack_release): |
1689 | 264 | """ | 264 | """ |
1690 | @@ -275,5 +275,5 @@ | |||
1691 | 275 | ''' | 275 | ''' |
1692 | 276 | interfaces = [] | 276 | interfaces = [] |
1693 | 277 | [interfaces.extend(i.complete_contexts()) | 277 | [interfaces.extend(i.complete_contexts()) |
1695 | 278 | for i in self.templates.itervalues()] | 278 | for i in six.itervalues(self.templates)] |
1696 | 279 | return interfaces | 279 | return interfaces |
1697 | 280 | 280 | ||
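The templating.py hunk above swaps `dict.iterkeys`/`itervalues`/`itervalues` calls for their `six` equivalents so the helpers run on both Python 2 and 3. A stdlib-only sketch of what that shim provides (illustrative, not charm code — `six.iteritems` does essentially this dispatch):

```python
import sys

def iteritems(d):
    """Return an iterator over (key, value) pairs on Python 2 and 3.

    Python 3 dropped dict.iteritems(), so dispatch on the interpreter
    version, much as six.iteritems does.
    """
    if sys.version_info[0] >= 3:
        return iter(d.items())
    return d.iteritems()

codenames = {'2014.1': 'icehouse', '2014.2': 'juno'}
pairs = sorted(iteritems(codenames))
```

The same dispatch idea covers `iterkeys` and `itervalues`; using the `six` library keeps all three behind one well-tested import.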
1698 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
1699 | --- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-07 21:03:47 +0000 | |||
1700 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-05 07:18:23 +0000 | |||
1701 | @@ -2,6 +2,7 @@ | |||
1702 | 2 | 2 | ||
1703 | 3 | # Common python helper functions used for OpenStack charms. | 3 | # Common python helper functions used for OpenStack charms. |
1704 | 4 | from collections import OrderedDict | 4 | from collections import OrderedDict |
1705 | 5 | from functools import wraps | ||
1706 | 5 | 6 | ||
1707 | 6 | import subprocess | 7 | import subprocess |
1708 | 7 | import json | 8 | import json |
1709 | @@ -9,11 +10,13 @@ | |||
1710 | 9 | import socket | 10 | import socket |
1711 | 10 | import sys | 11 | import sys |
1712 | 11 | 12 | ||
1713 | 13 | import six | ||
1714 | 14 | import yaml | ||
1715 | 15 | |||
1716 | 12 | from charmhelpers.core.hookenv import ( | 16 | from charmhelpers.core.hookenv import ( |
1717 | 13 | config, | 17 | config, |
1718 | 14 | log as juju_log, | 18 | log as juju_log, |
1719 | 15 | charm_dir, | 19 | charm_dir, |
1720 | 16 | ERROR, | ||
1721 | 17 | INFO, | 20 | INFO, |
1722 | 18 | relation_ids, | 21 | relation_ids, |
1723 | 19 | relation_set | 22 | relation_set |
1724 | @@ -30,7 +33,8 @@ | |||
1725 | 30 | ) | 33 | ) |
1726 | 31 | 34 | ||
1727 | 32 | from charmhelpers.core.host import lsb_release, mounts, umount | 35 | from charmhelpers.core.host import lsb_release, mounts, umount |
1729 | 33 | from charmhelpers.fetch import apt_install, apt_cache | 36 | from charmhelpers.fetch import apt_install, apt_cache, install_remote |
1730 | 37 | from charmhelpers.contrib.python.packages import pip_install | ||
1731 | 34 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk | 38 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
1732 | 35 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device | 39 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
1733 | 36 | 40 | ||
1734 | @@ -112,7 +116,7 @@ | |||
1735 | 112 | 116 | ||
1736 | 113 | # Best guess match based on deb string provided | 117 | # Best guess match based on deb string provided |
1737 | 114 | if src.startswith('deb') or src.startswith('ppa'): | 118 | if src.startswith('deb') or src.startswith('ppa'): |
1739 | 115 | for k, v in OPENSTACK_CODENAMES.iteritems(): | 119 | for k, v in six.iteritems(OPENSTACK_CODENAMES): |
1740 | 116 | if v in src: | 120 | if v in src: |
1741 | 117 | return v | 121 | return v |
1742 | 118 | 122 | ||
1743 | @@ -133,7 +137,7 @@ | |||
1744 | 133 | 137 | ||
1745 | 134 | def get_os_version_codename(codename): | 138 | def get_os_version_codename(codename): |
1746 | 135 | '''Determine OpenStack version number from codename.''' | 139 | '''Determine OpenStack version number from codename.''' |
1748 | 136 | for k, v in OPENSTACK_CODENAMES.iteritems(): | 140 | for k, v in six.iteritems(OPENSTACK_CODENAMES): |
1749 | 137 | if v == codename: | 141 | if v == codename: |
1750 | 138 | return k | 142 | return k |
1751 | 139 | e = 'Could not derive OpenStack version for '\ | 143 | e = 'Could not derive OpenStack version for '\ |
1752 | @@ -193,7 +197,7 @@ | |||
1753 | 193 | else: | 197 | else: |
1754 | 194 | vers_map = OPENSTACK_CODENAMES | 198 | vers_map = OPENSTACK_CODENAMES |
1755 | 195 | 199 | ||
1757 | 196 | for version, cname in vers_map.iteritems(): | 200 | for version, cname in six.iteritems(vers_map): |
1758 | 197 | if cname == codename: | 201 | if cname == codename: |
1759 | 198 | return version | 202 | return version |
1760 | 199 | # e = "Could not determine OpenStack version for package: %s" % pkg | 203 | # e = "Could not determine OpenStack version for package: %s" % pkg |
1761 | @@ -317,7 +321,7 @@ | |||
1762 | 317 | rc_script.write( | 321 | rc_script.write( |
1763 | 318 | "#!/bin/bash\n") | 322 | "#!/bin/bash\n") |
1764 | 319 | [rc_script.write('export %s=%s\n' % (u, p)) | 323 | [rc_script.write('export %s=%s\n' % (u, p)) |
1766 | 320 | for u, p in env_vars.iteritems() if u != "script_path"] | 324 | for u, p in six.iteritems(env_vars) if u != "script_path"] |
1767 | 321 | 325 | ||
1768 | 322 | 326 | ||
1769 | 323 | def openstack_upgrade_available(package): | 327 | def openstack_upgrade_available(package): |
1770 | @@ -350,8 +354,8 @@ | |||
1771 | 350 | ''' | 354 | ''' |
1772 | 351 | _none = ['None', 'none', None] | 355 | _none = ['None', 'none', None] |
1773 | 352 | if (block_device in _none): | 356 | if (block_device in _none): |
1776 | 353 | error_out('prepare_storage(): Missing required input: ' | 357 | error_out('prepare_storage(): Missing required input: block_device=%s.' |
1777 | 354 | 'block_device=%s.' % block_device, level=ERROR) | 358 | % block_device) |
1778 | 355 | 359 | ||
1779 | 356 | if block_device.startswith('/dev/'): | 360 | if block_device.startswith('/dev/'): |
1780 | 357 | bdev = block_device | 361 | bdev = block_device |
1781 | @@ -367,8 +371,7 @@ | |||
1782 | 367 | bdev = '/dev/%s' % block_device | 371 | bdev = '/dev/%s' % block_device |
1783 | 368 | 372 | ||
1784 | 369 | if not is_block_device(bdev): | 373 | if not is_block_device(bdev): |
1787 | 370 | error_out('Failed to locate valid block device at %s' % bdev, | 374 | error_out('Failed to locate valid block device at %s' % bdev) |
1786 | 371 | level=ERROR) | ||
1788 | 372 | 375 | ||
1789 | 373 | return bdev | 376 | return bdev |
1790 | 374 | 377 | ||
1791 | @@ -417,7 +420,7 @@ | |||
1792 | 417 | 420 | ||
1793 | 418 | if isinstance(address, dns.name.Name): | 421 | if isinstance(address, dns.name.Name): |
1794 | 419 | rtype = 'PTR' | 422 | rtype = 'PTR' |
1796 | 420 | elif isinstance(address, basestring): | 423 | elif isinstance(address, six.string_types): |
1797 | 421 | rtype = 'A' | 424 | rtype = 'A' |
1798 | 422 | else: | 425 | else: |
1799 | 423 | return None | 426 | return None |
1800 | @@ -468,6 +471,14 @@ | |||
1801 | 468 | return result.split('.')[0] | 471 | return result.split('.')[0] |
1802 | 469 | 472 | ||
1803 | 470 | 473 | ||
1804 | 474 | def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'): | ||
1805 | 475 | mm_map = {} | ||
1806 | 476 | if os.path.isfile(mm_file): | ||
1807 | 477 | with open(mm_file, 'r') as f: | ||
1808 | 478 | mm_map = json.load(f) | ||
1809 | 479 | return mm_map | ||
1810 | 480 | |||
1811 | 481 | |||
1812 | 471 | def sync_db_with_multi_ipv6_addresses(database, database_user, | 482 | def sync_db_with_multi_ipv6_addresses(database, database_user, |
1813 | 472 | relation_prefix=None): | 483 | relation_prefix=None): |
1814 | 473 | hosts = get_ipv6_addr(dynamic_only=False) | 484 | hosts = get_ipv6_addr(dynamic_only=False) |
1815 | @@ -477,10 +488,132 @@ | |||
1816 | 477 | 'hostname': json.dumps(hosts)} | 488 | 'hostname': json.dumps(hosts)} |
1817 | 478 | 489 | ||
1818 | 479 | if relation_prefix: | 490 | if relation_prefix: |
1821 | 480 | keys = kwargs.keys() | 491 | for key in list(kwargs.keys()): |
1820 | 481 | for key in keys: | ||
1822 | 482 | kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] | 492 | kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] |
1823 | 483 | del kwargs[key] | 493 | del kwargs[key] |
1824 | 484 | 494 | ||
1825 | 485 | for rid in relation_ids('shared-db'): | 495 | for rid in relation_ids('shared-db'): |
1826 | 486 | relation_set(relation_id=rid, **kwargs) | 496 | relation_set(relation_id=rid, **kwargs) |
1827 | 497 | |||
1828 | 498 | |||
1829 | 499 | def os_requires_version(ostack_release, pkg): | ||
1830 | 500 | """ | ||
1831 | 501 | Decorator for hook to specify minimum supported release | ||
1832 | 502 | """ | ||
1833 | 503 | def wrap(f): | ||
1834 | 504 | @wraps(f) | ||
1835 | 505 | def wrapped_f(*args): | ||
1836 | 506 | if os_release(pkg) < ostack_release: | ||
1837 | 507 | raise Exception("This hook is not supported on releases" | ||
1838 | 508 | " before %s" % ostack_release) | ||
1839 | 509 | f(*args) | ||
1840 | 510 | return wrapped_f | ||
1841 | 511 | return wrap | ||
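The `os_requires_version` decorator added above gates a hook on a minimum OpenStack release. A self-contained sketch of the same pattern, with a hypothetical `current_release` parameter standing in for the real `os_release(pkg)` lookup:

```python
from functools import wraps

def requires_release(minimum, current_release):
    """Decorator factory refusing to run a hook on releases older
    than `minimum`. `current_release` is a stand-in for os_release()."""
    def wrap(f):
        @wraps(f)
        def wrapped(*args):
            # OpenStack codenames (grizzly, havana, icehouse, juno, ...)
            # happen to sort alphabetically in release order, which is
            # what makes this plain string comparison work.
            if current_release < minimum:
                raise Exception("This hook is not supported on releases"
                                " before %s" % minimum)
            return f(*args)
        return wrapped
    return wrap

@requires_release('icehouse', current_release='juno')
def config_changed():
    return 'ran'
```

Note the string comparison relies on the alphabetical ordering of codenames, the same assumption the charm-helpers version makes.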
1842 | 512 | |||
1843 | 513 | |||
1844 | 514 | def git_install_requested(): | ||
1845 | 515 | """Returns true if openstack-origin-git is specified.""" | ||
1846 | 516 | return config('openstack-origin-git') != "None" | ||
1847 | 517 | |||
1848 | 518 | |||
1849 | 519 | requirements_dir = None | ||
1850 | 520 | |||
1851 | 521 | |||
1852 | 522 | def git_clone_and_install(file_name, core_project): | ||
1853 | 523 | """Clone/install all OpenStack repos specified in yaml config file.""" | ||
1854 | 524 | global requirements_dir | ||
1855 | 525 | |||
1856 | 526 | if file_name == "None": | ||
1857 | 527 | return | ||
1858 | 528 | |||
1859 | 529 | yaml_file = os.path.join(charm_dir(), file_name) | ||
1860 | 530 | |||
1861 | 531 | # clone/install the requirements project first | ||
1862 | 532 | installed = _git_clone_and_install_subset(yaml_file, | ||
1863 | 533 | whitelist=['requirements']) | ||
1864 | 534 | if 'requirements' not in installed: | ||
1865 | 535 | error_out('requirements git repository must be specified') | ||
1866 | 536 | |||
1867 | 537 | # clone/install all other projects except requirements and the core project | ||
1868 | 538 | blacklist = ['requirements', core_project] | ||
1869 | 539 | _git_clone_and_install_subset(yaml_file, blacklist=blacklist, | ||
1870 | 540 | update_requirements=True) | ||
1871 | 541 | |||
1872 | 542 | # clone/install the core project | ||
1873 | 543 | whitelist = [core_project] | ||
1874 | 544 | installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist, | ||
1875 | 545 | update_requirements=True) | ||
1876 | 546 | if core_project not in installed: | ||
1877 | 547 | error_out('{} git repository must be specified'.format(core_project)) | ||
1878 | 548 | |||
1879 | 549 | |||
1880 | 550 | def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[], | ||
1881 | 551 | update_requirements=False): | ||
1882 | 552 | """Clone/install subset of OpenStack repos specified in yaml config file.""" | ||
1883 | 553 | global requirements_dir | ||
1884 | 554 | installed = [] | ||
1885 | 555 | |||
1886 | 556 | with open(yaml_file, 'r') as fd: | ||
1887 | 557 | projects = yaml.load(fd) | ||
1888 | 558 | for proj, val in projects.items(): | ||
1889 | 559 | # The project subset is chosen based on the following 3 rules: | ||
1890 | 560 | # 1) If project is in blacklist, we don't clone/install it, period. | ||
1891 | 561 | # 2) If whitelist is empty, we clone/install everything else. | ||
1892 | 562 | # 3) If whitelist is not empty, we clone/install everything in the | ||
1893 | 563 | # whitelist. | ||
1894 | 564 | if proj in blacklist: | ||
1895 | 565 | continue | ||
1896 | 566 | if whitelist and proj not in whitelist: | ||
1897 | 567 | continue | ||
1898 | 568 | repo = val['repository'] | ||
1899 | 569 | branch = val['branch'] | ||
1900 | 570 | repo_dir = _git_clone_and_install_single(repo, branch, | ||
1901 | 571 | update_requirements) | ||
1902 | 572 | if proj == 'requirements': | ||
1903 | 573 | requirements_dir = repo_dir | ||
1904 | 574 | installed.append(proj) | ||
1905 | 575 | return installed | ||
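The three selection rules documented in `_git_clone_and_install_subset` above can be modelled in isolation; this is a hypothetical helper, not charm code, showing just the whitelist/blacklist logic:

```python
def select_projects(projects, whitelist=(), blacklist=()):
    """Pick projects per the rules above:
    1) a blacklisted project is never selected;
    2) with an empty whitelist, everything else is selected;
    3) with a non-empty whitelist, only whitelisted projects are.
    """
    selected = []
    for proj in projects:
        if proj in blacklist:
            continue
        if whitelist and proj not in whitelist:
            continue
        selected.append(proj)
    return selected

projects = ['requirements', 'neutron', 'oslo.config']
```

This ordering lets `git_clone_and_install` make three passes: requirements first (whitelist), then everything but requirements and the core project (blacklist), then the core project last.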
1906 | 576 | |||
1907 | 577 | |||
1908 | 578 | def _git_clone_and_install_single(repo, branch, update_requirements=False): | ||
1909 | 579 | """Clone and install a single git repository.""" | ||
1910 | 580 | dest_parent_dir = "/mnt/openstack-git/" | ||
1911 | 581 | dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo)) | ||
1912 | 582 | |||
1913 | 583 | if not os.path.exists(dest_parent_dir): | ||
1914 | 584 | juju_log('Host dir not mounted at {}. ' | ||
1915 | 585 | 'Creating directory there instead.'.format(dest_parent_dir)) | ||
1916 | 586 | os.mkdir(dest_parent_dir) | ||
1917 | 587 | |||
1918 | 588 | if not os.path.exists(dest_dir): | ||
1919 | 589 | juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch)) | ||
1920 | 590 | repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch) | ||
1921 | 591 | else: | ||
1922 | 592 | repo_dir = dest_dir | ||
1923 | 593 | |||
1924 | 594 | if update_requirements: | ||
1925 | 595 | if not requirements_dir: | ||
1926 | 596 | error_out('requirements repo must be cloned before ' | ||
1927 | 597 | 'updating from global requirements.') | ||
1928 | 598 | _git_update_requirements(repo_dir, requirements_dir) | ||
1929 | 599 | |||
1930 | 600 | juju_log('Installing git repo from dir: {}'.format(repo_dir)) | ||
1931 | 601 | pip_install(repo_dir) | ||
1932 | 602 | |||
1933 | 603 | return repo_dir | ||
1934 | 604 | |||
1935 | 605 | |||
1936 | 606 | def _git_update_requirements(package_dir, reqs_dir): | ||
1937 | 607 | """Update from global requirements. | ||
1938 | 608 | |||
1939 | 609 | Update an OpenStack git directory's requirements.txt and | ||
1940 | 610 | test-requirements.txt from global-requirements.txt.""" | ||
1941 | 611 | orig_dir = os.getcwd() | ||
1942 | 612 | os.chdir(reqs_dir) | ||
1943 | 613 | cmd = "python update.py {}".format(package_dir) | ||
1944 | 614 | try: | ||
1945 | 615 | subprocess.check_call(cmd.split(' ')) | ||
1946 | 616 | except subprocess.CalledProcessError: | ||
1947 | 617 | package = os.path.basename(package_dir) | ||
1948 | 618 | error_out("Error updating {} from global-requirements.txt".format(package)) | ||
1949 | 619 | os.chdir(orig_dir) | ||
1950 | 487 | 620 | ||
1951 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' | |||
1952 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-29 07:46:01 +0000 | |||
1953 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-05 07:18:23 +0000 | |||
1954 | @@ -16,19 +16,18 @@ | |||
1955 | 16 | from subprocess import ( | 16 | from subprocess import ( |
1956 | 17 | check_call, | 17 | check_call, |
1957 | 18 | check_output, | 18 | check_output, |
1959 | 19 | CalledProcessError | 19 | CalledProcessError, |
1960 | 20 | ) | 20 | ) |
1961 | 21 | |||
1962 | 22 | from charmhelpers.core.hookenv import ( | 21 | from charmhelpers.core.hookenv import ( |
1963 | 23 | relation_get, | 22 | relation_get, |
1964 | 24 | relation_ids, | 23 | relation_ids, |
1965 | 25 | related_units, | 24 | related_units, |
1966 | 26 | log, | 25 | log, |
1967 | 26 | DEBUG, | ||
1968 | 27 | INFO, | 27 | INFO, |
1969 | 28 | WARNING, | 28 | WARNING, |
1971 | 29 | ERROR | 29 | ERROR, |
1972 | 30 | ) | 30 | ) |
1973 | 31 | |||
1974 | 32 | from charmhelpers.core.host import ( | 31 | from charmhelpers.core.host import ( |
1975 | 33 | mount, | 32 | mount, |
1976 | 34 | mounts, | 33 | mounts, |
1977 | @@ -37,7 +36,6 @@ | |||
1978 | 37 | service_running, | 36 | service_running, |
1979 | 38 | umount, | 37 | umount, |
1980 | 39 | ) | 38 | ) |
1981 | 40 | |||
1982 | 41 | from charmhelpers.fetch import ( | 39 | from charmhelpers.fetch import ( |
1983 | 42 | apt_install, | 40 | apt_install, |
1984 | 43 | ) | 41 | ) |
1985 | @@ -56,99 +54,85 @@ | |||
1986 | 56 | 54 | ||
1987 | 57 | 55 | ||
1988 | 58 | def install(): | 56 | def install(): |
1990 | 59 | ''' Basic Ceph client installation ''' | 57 | """Basic Ceph client installation.""" |
1991 | 60 | ceph_dir = "/etc/ceph" | 58 | ceph_dir = "/etc/ceph" |
1992 | 61 | if not os.path.exists(ceph_dir): | 59 | if not os.path.exists(ceph_dir): |
1993 | 62 | os.mkdir(ceph_dir) | 60 | os.mkdir(ceph_dir) |
1994 | 61 | |||
1995 | 63 | apt_install('ceph-common', fatal=True) | 62 | apt_install('ceph-common', fatal=True) |
1996 | 64 | 63 | ||
1997 | 65 | 64 | ||
1998 | 66 | def rbd_exists(service, pool, rbd_img): | 65 | def rbd_exists(service, pool, rbd_img): |
2000 | 67 | ''' Check to see if a RADOS block device exists ''' | 66 | """Check to see if a RADOS block device exists.""" |
2001 | 68 | try: | 67 | try: |
2004 | 69 | out = check_output(['rbd', 'list', '--id', service, | 68 | out = check_output(['rbd', 'list', '--id', |
2005 | 70 | '--pool', pool]) | 69 | service, '--pool', pool]).decode('UTF-8') |
2006 | 71 | except CalledProcessError: | 70 | except CalledProcessError: |
2007 | 72 | return False | 71 | return False |
2010 | 73 | else: | 72 | |
2011 | 74 | return rbd_img in out | 73 | return rbd_img in out |
2012 | 75 | 74 | ||
2013 | 76 | 75 | ||
2014 | 77 | def create_rbd_image(service, pool, image, sizemb): | 76 | def create_rbd_image(service, pool, image, sizemb): |
2027 | 78 | ''' Create a new RADOS block device ''' | 77 | """Create a new RADOS block device.""" |
2028 | 79 | cmd = [ | 78 | cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service, |
2029 | 80 | 'rbd', | 79 | '--pool', pool] |
2018 | 81 | 'create', | ||
2019 | 82 | image, | ||
2020 | 83 | '--size', | ||
2021 | 84 | str(sizemb), | ||
2022 | 85 | '--id', | ||
2023 | 86 | service, | ||
2024 | 87 | '--pool', | ||
2025 | 88 | pool | ||
2026 | 89 | ] | ||
2030 | 90 | check_call(cmd) | 80 | check_call(cmd) |
2031 | 91 | 81 | ||
2032 | 92 | 82 | ||
2033 | 93 | def pool_exists(service, name): | 83 | def pool_exists(service, name): |
2035 | 94 | ''' Check to see if a RADOS pool already exists ''' | 84 | """Check to see if a RADOS pool already exists.""" |
2036 | 95 | try: | 85 | try: |
2038 | 96 | out = check_output(['rados', '--id', service, 'lspools']) | 86 | out = check_output(['rados', '--id', service, |
2039 | 87 | 'lspools']).decode('UTF-8') | ||
2040 | 97 | except CalledProcessError: | 88 | except CalledProcessError: |
2041 | 98 | return False | 89 | return False |
2044 | 99 | else: | 90 | |
2045 | 100 | return name in out | 91 | return name in out |
2046 | 101 | 92 | ||
2047 | 102 | 93 | ||
2048 | 103 | def get_osds(service): | 94 | def get_osds(service): |
2053 | 104 | ''' | 95 | """Return a list of all Ceph Object Storage Daemons currently in the |
2054 | 105 | Return a list of all Ceph Object Storage Daemons | 96 | cluster. |
2055 | 106 | currently in the cluster | 97 | """ |
2052 | 107 | ''' | ||
2056 | 108 | version = ceph_version() | 98 | version = ceph_version() |
2057 | 109 | if version and version >= '0.56': | 99 | if version and version >= '0.56': |
2058 | 110 | return json.loads(check_output(['ceph', '--id', service, | 100 | return json.loads(check_output(['ceph', '--id', service, |
2066 | 111 | 'osd', 'ls', '--format=json'])) | 101 | 'osd', 'ls', |
2067 | 112 | else: | 102 | '--format=json']).decode('UTF-8')) |
2068 | 113 | return None | 103 | |
2069 | 114 | 104 | return None | |
2070 | 115 | 105 | ||
2071 | 116 | def create_pool(service, name, replicas=2): | 106 | |
2072 | 117 | ''' Create a new RADOS pool ''' | 107 | def create_pool(service, name, replicas=3): |
2073 | 108 | """Create a new RADOS pool.""" | ||
2074 | 118 | if pool_exists(service, name): | 109 | if pool_exists(service, name): |
2075 | 119 | log("Ceph pool {} already exists, skipping creation".format(name), | 110 | log("Ceph pool {} already exists, skipping creation".format(name), |
2076 | 120 | level=WARNING) | 111 | level=WARNING) |
2077 | 121 | return | 112 | return |
2078 | 113 | |||
2079 | 122 | # Calculate the number of placement groups based | 114 | # Calculate the number of placement groups based |
2080 | 123 | # on upstream recommended best practices. | 115 | # on upstream recommended best practices. |
2081 | 124 | osds = get_osds(service) | 116 | osds = get_osds(service) |
2082 | 125 | if osds: | 117 | if osds: |
2084 | 126 | pgnum = (len(osds) * 100 / replicas) | 118 | pgnum = (len(osds) * 100 // replicas) |
2085 | 127 | else: | 119 | else: |
2086 | 128 | # NOTE(james-page): Default to 200 for older ceph versions | 120 | # NOTE(james-page): Default to 200 for older ceph versions |
2087 | 129 | # which don't support OSD query from cli | 121 | # which don't support OSD query from cli |
2088 | 130 | pgnum = 200 | 122 | pgnum = 200 |
2094 | 131 | cmd = [ | 123 | |
2095 | 132 | 'ceph', '--id', service, | 124 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)] |
2091 | 133 | 'osd', 'pool', 'create', | ||
2092 | 134 | name, str(pgnum) | ||
2093 | 135 | ] | ||
2096 | 136 | check_call(cmd) | 125 | check_call(cmd) |
2102 | 137 | cmd = [ | 126 | |
2103 | 138 | 'ceph', '--id', service, | 127 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size', |
2104 | 139 | 'osd', 'pool', 'set', name, | 128 | str(replicas)] |
2100 | 140 | 'size', str(replicas) | ||
2101 | 141 | ] | ||
2105 | 142 | check_call(cmd) | 129 | check_call(cmd) |
2106 | 143 | 130 | ||
2107 | 144 | 131 | ||
2108 | 145 | def delete_pool(service, name): | 132 | def delete_pool(service, name): |
2115 | 146 | ''' Delete a RADOS pool from ceph ''' | 133 | """Delete a RADOS pool from ceph.""" |
2116 | 147 | cmd = [ | 134 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name, |
2117 | 148 | 'ceph', '--id', service, | 135 | '--yes-i-really-really-mean-it'] |
2112 | 149 | 'osd', 'pool', 'delete', | ||
2113 | 150 | name, '--yes-i-really-really-mean-it' | ||
2114 | 151 | ] | ||
2118 | 152 | check_call(cmd) | 136 | check_call(cmd) |
2119 | 153 | 137 | ||
2120 | 154 | 138 | ||
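In the `create_pool` hunk above, the placement-group arithmetic changes from `/` to `//`: on Python 3 plain division returns a float, and ceph expects an integer PG count. A sketch of the calculation (function name hypothetical; the 100-PGs-per-OSD rule and the 200 fallback are taken from the diff):

```python
def placement_groups(num_osds, replicas=3):
    """Placement-group count per the upstream guideline of roughly
    100 PGs per OSD, divided across replicas.

    Floor division (//) keeps the result an int on Python 3; falls
    back to 200 when the OSD count is unknown (older ceph releases
    without 'osd ls' support).
    """
    if num_osds:
        return num_osds * 100 // replicas
    return 200
```

The same hunk also bumps the default replica count from 2 to 3, matching ceph's upstream default pool size.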
2121 | @@ -161,44 +145,43 @@ | |||
2122 | 161 | 145 | ||
2123 | 162 | 146 | ||
2124 | 163 | def create_keyring(service, key): | 147 | def create_keyring(service, key): |
2126 | 164 | ''' Create a new Ceph keyring containing key''' | 148 | """Create a new Ceph keyring containing key.""" |
2127 | 165 | keyring = _keyring_path(service) | 149 | keyring = _keyring_path(service) |
2128 | 166 | if os.path.exists(keyring): | 150 | if os.path.exists(keyring): |
2130 | 167 | log('ceph: Keyring exists at %s.' % keyring, level=WARNING) | 151 | log('Ceph keyring exists at %s.' % keyring, level=WARNING) |
2131 | 168 | return | 152 | return |
2139 | 169 | cmd = [ | 153 | |
2140 | 170 | 'ceph-authtool', | 154 | cmd = ['ceph-authtool', keyring, '--create-keyring', |
2141 | 171 | keyring, | 155 | '--name=client.{}'.format(service), '--add-key={}'.format(key)] |
2135 | 172 | '--create-keyring', | ||
2136 | 173 | '--name=client.{}'.format(service), | ||
2137 | 174 | '--add-key={}'.format(key) | ||
2138 | 175 | ] | ||
2142 | 176 | check_call(cmd) | 156 | check_call(cmd) |
2144 | 177 | log('ceph: Created new ring at %s.' % keyring, level=INFO) | 157 | log('Created new ceph keyring at %s.' % keyring, level=DEBUG) |
2145 | 178 | 158 | ||
2146 | 179 | 159 | ||
2147 | 180 | def create_key_file(service, key): | 160 | def create_key_file(service, key): |
2149 | 181 | ''' Create a file containing key ''' | 161 | """Create a file containing key.""" |
2150 | 182 | keyfile = _keyfile_path(service) | 162 | keyfile = _keyfile_path(service) |
2151 | 183 | if os.path.exists(keyfile): | 163 | if os.path.exists(keyfile): |
2153 | 184 | log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) | 164 | log('Keyfile exists at %s.' % keyfile, level=WARNING) |
2154 | 185 | return | 165 | return |
2155 | 166 | |||
2156 | 186 | with open(keyfile, 'w') as fd: | 167 | with open(keyfile, 'w') as fd: |
2157 | 187 | fd.write(key) | 168 | fd.write(key) |
2159 | 188 | log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) | 169 | |
2160 | 170 | log('Created new keyfile at %s.' % keyfile, level=INFO) | ||
2161 | 189 | 171 | ||
2162 | 190 | 172 | ||
2163 | 191 | def get_ceph_nodes(): | 173 | def get_ceph_nodes(): |
2165 | 192 | ''' Query named relation 'ceph' to detemine current nodes ''' | 174 | """Query named relation 'ceph' to determine current nodes.""" |
2166 | 193 | hosts = [] | 175 | hosts = [] |
2167 | 194 | for r_id in relation_ids('ceph'): | 176 | for r_id in relation_ids('ceph'): |
2168 | 195 | for unit in related_units(r_id): | 177 | for unit in related_units(r_id): |
2169 | 196 | hosts.append(relation_get('private-address', unit=unit, rid=r_id)) | 178 | hosts.append(relation_get('private-address', unit=unit, rid=r_id)) |
2170 | 179 | |||
2171 | 197 | return hosts | 180 | return hosts |
2172 | 198 | 181 | ||
2173 | 199 | 182 | ||
2174 | 200 | def configure(service, key, auth, use_syslog): | 183 | def configure(service, key, auth, use_syslog): |
2176 | 201 | ''' Perform basic configuration of Ceph ''' | 184 | """Perform basic configuration of Ceph.""" |
2177 | 202 | create_keyring(service, key) | 185 | create_keyring(service, key) |
2178 | 203 | create_key_file(service, key) | 186 | create_key_file(service, key) |
2179 | 204 | hosts = get_ceph_nodes() | 187 | hosts = get_ceph_nodes() |
2180 | @@ -211,17 +194,17 @@ | |||
2181 | 211 | 194 | ||
2182 | 212 | 195 | ||
2183 | 213 | def image_mapped(name): | 196 | def image_mapped(name): |
2185 | 214 | ''' Determine whether a RADOS block device is mapped locally ''' | 197 | """Determine whether a RADOS block device is mapped locally.""" |
2186 | 215 | try: | 198 | try: |
2188 | 216 | out = check_output(['rbd', 'showmapped']) | 199 | out = check_output(['rbd', 'showmapped']).decode('UTF-8') |
2189 | 217 | except CalledProcessError: | 200 | except CalledProcessError: |
2190 | 218 | return False | 201 | return False |
2193 | 219 | else: | 202 | |
2194 | 220 | return name in out | 203 | return name in out |
2195 | 221 | 204 | ||
2196 | 222 | 205 | ||
2197 | 223 | def map_block_storage(service, pool, image): | 206 | def map_block_storage(service, pool, image): |
2199 | 224 | ''' Map a RADOS block device for local use ''' | 207 | """Map a RADOS block device for local use.""" |
2200 | 225 | cmd = [ | 208 | cmd = [ |
2201 | 226 | 'rbd', | 209 | 'rbd', |
2202 | 227 | 'map', | 210 | 'map', |
2203 | @@ -235,31 +218,32 @@ | |||
2204 | 235 | 218 | ||
2205 | 236 | 219 | ||
2206 | 237 | def filesystem_mounted(fs): | 220 | def filesystem_mounted(fs): |
2208 | 238 | ''' Determine whether a filesytems is already mounted ''' | 221 | """Determine whether a filesytems is already mounted.""" |
2209 | 239 | return fs in [f for f, m in mounts()] | 222 | return fs in [f for f, m in mounts()] |
2210 | 240 | 223 | ||
2211 | 241 | 224 | ||
2212 | 242 | def make_filesystem(blk_device, fstype='ext4', timeout=10): | 225 | def make_filesystem(blk_device, fstype='ext4', timeout=10): |
2214 | 243 | ''' Make a new filesystem on the specified block device ''' | 226 | """Make a new filesystem on the specified block device.""" |
2215 | 244 | count = 0 | 227 | count = 0 |
2216 | 245 | e_noent = os.errno.ENOENT | 228 | e_noent = os.errno.ENOENT |
2217 | 246 | while not os.path.exists(blk_device): | 229 | while not os.path.exists(blk_device): |
2218 | 247 | if count >= timeout: | 230 | if count >= timeout: |
2220 | 248 | log('ceph: gave up waiting on block device %s' % blk_device, | 231 | log('Gave up waiting on block device %s' % blk_device, |
2221 | 249 | level=ERROR) | 232 | level=ERROR) |
2222 | 250 | raise IOError(e_noent, os.strerror(e_noent), blk_device) | 233 | raise IOError(e_noent, os.strerror(e_noent), blk_device) |
2225 | 251 | log('ceph: waiting for block device %s to appear' % blk_device, | 234 | |
2226 | 252 | level=INFO) | 235 | log('Waiting for block device %s to appear' % blk_device, |
2227 | 236 | level=DEBUG) | ||
2228 | 253 | count += 1 | 237 | count += 1 |
2229 | 254 | time.sleep(1) | 238 | time.sleep(1) |
2230 | 255 | else: | 239 | else: |
2232 | 256 | log('ceph: Formatting block device %s as filesystem %s.' % | 240 | log('Formatting block device %s as filesystem %s.' % |
2233 | 257 | (blk_device, fstype), level=INFO) | 241 | (blk_device, fstype), level=INFO) |
2234 | 258 | check_call(['mkfs', '-t', fstype, blk_device]) | 242 | check_call(['mkfs', '-t', fstype, blk_device]) |
2235 | 259 | 243 | ||
2236 | 260 | 244 | ||
2237 | 261 | def place_data_on_block_device(blk_device, data_src_dst): | 245 | def place_data_on_block_device(blk_device, data_src_dst): |
2239 | 262 | ''' Migrate data in data_src_dst to blk_device and then remount ''' | 246 | """Migrate data in data_src_dst to blk_device and then remount.""" |
2240 | 263 | # mount block device into /mnt | 247 | # mount block device into /mnt |
2241 | 264 | mount(blk_device, '/mnt') | 248 | mount(blk_device, '/mnt') |
2242 | 265 | # copy data to /mnt | 249 | # copy data to /mnt |
2243 | @@ -279,8 +263,8 @@ | |||
2244 | 279 | 263 | ||
2245 | 280 | # TODO: re-use | 264 | # TODO: re-use |
2246 | 281 | def modprobe(module): | 265 | def modprobe(module): |
2249 | 282 | ''' Load a kernel module and configure for auto-load on reboot ''' | 266 | """Load a kernel module and configure for auto-load on reboot.""" |
2250 | 283 | log('ceph: Loading kernel module', level=INFO) | 267 | log('Loading kernel module', level=INFO) |
2251 | 284 | cmd = ['modprobe', module] | 268 | cmd = ['modprobe', module] |
2252 | 285 | check_call(cmd) | 269 | check_call(cmd) |
2253 | 286 | with open('/etc/modules', 'r+') as modules: | 270 | with open('/etc/modules', 'r+') as modules: |
2254 | @@ -289,7 +273,7 @@ | |||
2255 | 289 | 273 | ||
2256 | 290 | 274 | ||
2257 | 291 | def copy_files(src, dst, symlinks=False, ignore=None): | 275 | def copy_files(src, dst, symlinks=False, ignore=None): |
2259 | 292 | ''' Copy files from src to dst ''' | 276 | """Copy files from src to dst.""" |
2260 | 293 | for item in os.listdir(src): | 277 | for item in os.listdir(src): |
2261 | 294 | s = os.path.join(src, item) | 278 | s = os.path.join(src, item) |
2262 | 295 | d = os.path.join(dst, item) | 279 | d = os.path.join(dst, item) |
2263 | @@ -300,9 +284,9 @@ | |||
2264 | 300 | 284 | ||
2265 | 301 | 285 | ||
2266 | 302 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, | 286 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
2270 | 303 | blk_device, fstype, system_services=[]): | 287 | blk_device, fstype, system_services=[], |
2271 | 304 | """ | 288 | replicas=3): |
2272 | 305 | NOTE: This function must only be called from a single service unit for | 289 | """NOTE: This function must only be called from a single service unit for |
2273 | 306 | the same rbd_img otherwise data loss will occur. | 290 | the same rbd_img otherwise data loss will occur. |
2274 | 307 | 291 | ||
2275 | 308 | Ensures given pool and RBD image exists, is mapped to a block device, | 292 | Ensures given pool and RBD image exists, is mapped to a block device, |
2276 | @@ -316,15 +300,16 @@ | |||
2277 | 316 | """ | 300 | """ |
2278 | 317 | # Ensure pool, RBD image, RBD mappings are in place. | 301 | # Ensure pool, RBD image, RBD mappings are in place. |
2279 | 318 | if not pool_exists(service, pool): | 302 | if not pool_exists(service, pool): |
2282 | 319 | log('ceph: Creating new pool {}.'.format(pool)) | 303 | log('Creating new pool {}.'.format(pool), level=INFO) |
2283 | 320 | create_pool(service, pool) | 304 | create_pool(service, pool, replicas=replicas) |
2284 | 321 | 305 | ||
2285 | 322 | if not rbd_exists(service, pool, rbd_img): | 306 | if not rbd_exists(service, pool, rbd_img): |
2287 | 323 | log('ceph: Creating RBD image ({}).'.format(rbd_img)) | 307 | log('Creating RBD image ({}).'.format(rbd_img), level=INFO) |
2288 | 324 | create_rbd_image(service, pool, rbd_img, sizemb) | 308 | create_rbd_image(service, pool, rbd_img, sizemb) |
2289 | 325 | 309 | ||
2290 | 326 | if not image_mapped(rbd_img): | 310 | if not image_mapped(rbd_img): |
2292 | 327 | log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) | 311 | log('Mapping RBD Image {} as a Block Device.'.format(rbd_img), |
2293 | 312 | level=INFO) | ||
2294 | 328 | map_block_storage(service, pool, rbd_img) | 313 | map_block_storage(service, pool, rbd_img) |
2295 | 329 | 314 | ||
2296 | 330 | # make file system | 315 | # make file system |
2297 | @@ -339,45 +324,47 @@ | |||
2298 | 339 | 324 | ||
2299 | 340 | for svc in system_services: | 325 | for svc in system_services: |
2300 | 341 | if service_running(svc): | 326 | if service_running(svc): |
2303 | 342 | log('ceph: Stopping services {} prior to migrating data.' | 327 | log('Stopping services {} prior to migrating data.' |
2304 | 343 | .format(svc)) | 328 | .format(svc), level=DEBUG) |
2305 | 344 | service_stop(svc) | 329 | service_stop(svc) |
2306 | 345 | 330 | ||
2307 | 346 | place_data_on_block_device(blk_device, mount_point) | 331 | place_data_on_block_device(blk_device, mount_point) |
2308 | 347 | 332 | ||
2309 | 348 | for svc in system_services: | 333 | for svc in system_services: |
2312 | 349 | log('ceph: Starting service {} after migrating data.' | 334 | log('Starting service {} after migrating data.' |
2313 | 350 | .format(svc)) | 335 | .format(svc), level=DEBUG) |
2314 | 351 | service_start(svc) | 336 | service_start(svc) |
2315 | 352 | 337 | ||
2316 | 353 | 338 | ||
2317 | 354 | def ensure_ceph_keyring(service, user=None, group=None): | 339 | def ensure_ceph_keyring(service, user=None, group=None): |
2321 | 355 | ''' | 340 | """Ensures a ceph keyring is created for a named service and optionally |
2322 | 356 | Ensures a ceph keyring is created for a named service | 341 | ensures user and group ownership. |
2320 | 357 | and optionally ensures user and group ownership. | ||
2323 | 358 | 342 | ||
2324 | 359 | Returns False if no ceph key is available in relation state. | 343 | Returns False if no ceph key is available in relation state. |
2326 | 360 | ''' | 344 | """ |
2327 | 361 | key = None | 345 | key = None |
2328 | 362 | for rid in relation_ids('ceph'): | 346 | for rid in relation_ids('ceph'): |
2329 | 363 | for unit in related_units(rid): | 347 | for unit in related_units(rid): |
2330 | 364 | key = relation_get('key', rid=rid, unit=unit) | 348 | key = relation_get('key', rid=rid, unit=unit) |
2331 | 365 | if key: | 349 | if key: |
2332 | 366 | break | 350 | break |
2333 | 351 | |||
2334 | 367 | if not key: | 352 | if not key: |
2335 | 368 | return False | 353 | return False |
2336 | 354 | |||
2337 | 369 | create_keyring(service=service, key=key) | 355 | create_keyring(service=service, key=key) |
2338 | 370 | keyring = _keyring_path(service) | 356 | keyring = _keyring_path(service) |
2339 | 371 | if user and group: | 357 | if user and group: |
2340 | 372 | check_call(['chown', '%s.%s' % (user, group), keyring]) | 358 | check_call(['chown', '%s.%s' % (user, group), keyring]) |
2341 | 359 | |||
2342 | 373 | return True | 360 | return True |
2343 | 374 | 361 | ||
2344 | 375 | 362 | ||
2345 | 376 | def ceph_version(): | 363 | def ceph_version(): |
2347 | 377 | ''' Retrieve the local version of ceph ''' | 364 | """Retrieve the local version of ceph.""" |
2348 | 378 | if os.path.exists('/usr/bin/ceph'): | 365 | if os.path.exists('/usr/bin/ceph'): |
2349 | 379 | cmd = ['ceph', '-v'] | 366 | cmd = ['ceph', '-v'] |
2351 | 380 | output = check_output(cmd) | 367 | output = check_output(cmd).decode('US-ASCII') |
2352 | 381 | output = output.split() | 368 | output = output.split() |
2353 | 382 | if len(output) > 3: | 369 | if len(output) > 3: |
2354 | 383 | return output[2] | 370 | return output[2] |
2355 | 384 | 371 | ||
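The `ceph_version()` hunk above shows the recurring Python 3 pattern in this branch: `check_output()` returns bytes, so output is decoded before string methods are used. A minimal standalone sketch of that parsing step (the function name `parse_ceph_version` is hypothetical, not part of charmhelpers):

```python
def parse_ceph_version(raw):
    """Extract the version field from `ceph -v` output.

    Mirrors the patched ceph_version(): on Python 3 check_output()
    yields bytes, which must be decoded before str.split() works.
    """
    if isinstance(raw, bytes):
        raw = raw.decode('US-ASCII')
    fields = raw.split()
    # `ceph -v` prints e.g. "ceph version 0.80.7 (<sha1>)"
    if len(fields) > 3:
        return fields[2]
    return None


print(parse_ceph_version(b'ceph version 0.80.7 (deadbeef)'))  # 0.80.7
```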
2356 | === modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' | |||
2357 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-11-08 05:55:44 +0000 | |||
2358 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-05 07:18:23 +0000 | |||
2359 | @@ -1,12 +1,12 @@ | |||
2360 | 1 | |||
2361 | 2 | import os | 1 | import os |
2362 | 3 | import re | 2 | import re |
2363 | 4 | |||
2364 | 5 | from subprocess import ( | 3 | from subprocess import ( |
2365 | 6 | check_call, | 4 | check_call, |
2366 | 7 | check_output, | 5 | check_output, |
2367 | 8 | ) | 6 | ) |
2368 | 9 | 7 | ||
2369 | 8 | import six | ||
2370 | 9 | |||
2371 | 10 | 10 | ||
2372 | 11 | ################################################## | 11 | ################################################## |
2373 | 12 | # loopback device helpers. | 12 | # loopback device helpers. |
2374 | @@ -37,7 +37,7 @@ | |||
2375 | 37 | ''' | 37 | ''' |
2376 | 38 | file_path = os.path.abspath(file_path) | 38 | file_path = os.path.abspath(file_path) |
2377 | 39 | check_call(['losetup', '--find', file_path]) | 39 | check_call(['losetup', '--find', file_path]) |
2379 | 40 | for d, f in loopback_devices().iteritems(): | 40 | for d, f in six.iteritems(loopback_devices()): |
2380 | 41 | if f == file_path: | 41 | if f == file_path: |
2381 | 42 | return d | 42 | return d |
2382 | 43 | 43 | ||
2383 | @@ -51,7 +51,7 @@ | |||
2384 | 51 | 51 | ||
2385 | 52 | :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) | 52 | :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) |
2386 | 53 | ''' | 53 | ''' |
2388 | 54 | for d, f in loopback_devices().iteritems(): | 54 | for d, f in six.iteritems(loopback_devices()): |
2389 | 55 | if f == path: | 55 | if f == path: |
2390 | 56 | return d | 56 | return d |
2391 | 57 | 57 | ||
2392 | 58 | 58 | ||
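The loopback changes replace `dict.iteritems()` (removed in Python 3) with `six.iteritems()`. A self-contained sketch of the same lookup, with a local `iteritems` shim standing in for `six` (the helper names here are illustrative only):

```python
import sys


def iteritems(d):
    # Equivalent of six.iteritems(): a lazy items iterator that works
    # on both Python 2 (d.iteritems) and Python 3 (iter(d.items())).
    return d.iteritems() if sys.version_info[0] == 2 else iter(d.items())


def find_loopback(mapping, file_path):
    """Return the loop device backed by file_path, as in the
    patched loopback helpers: {'/dev/loop0': '/srv/img', ...}."""
    for dev, backing in iteritems(mapping):
        if backing == file_path:
            return dev
    return None
```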
2393 | === modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' | |||
2394 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:43:55 +0000 | |||
2395 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-05 07:18:23 +0000 | |||
2396 | @@ -61,6 +61,7 @@ | |||
2397 | 61 | vg = None | 61 | vg = None |
2398 | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() |
2399 | 63 | for l in pvd: | 63 | for l in pvd: |
2400 | 64 | l = l.decode('UTF-8') | ||
2401 | 64 | if l.strip().startswith('VG Name'): | 65 | if l.strip().startswith('VG Name'): |
2402 | 65 | vg = ' '.join(l.strip().split()[2:]) | 66 | vg = ' '.join(l.strip().split()[2:]) |
2403 | 66 | return vg | 67 | return vg |
2404 | 67 | 68 | ||
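The one-line `lvm.py` change decodes each `pvdisplay` line before matching on it. A hedged, testable sketch of that parse against captured output (`parse_vg_name` is an invented name for illustration):

```python
def parse_vg_name(pvd_output):
    """Extract the volume group name from `pvdisplay` output.

    Follows the patched list_lvm_volume_group(): decode each line
    (bytes on Python 3) before looking for the 'VG Name' field.
    """
    vg = None
    for line in pvd_output.splitlines():
        if isinstance(line, bytes):
            line = line.decode('UTF-8')
        if line.strip().startswith('VG Name'):
            # Field value is everything after the two-word label.
            vg = ' '.join(line.strip().split()[2:])
    return vg
```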
2405 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' | |||
2406 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:47 +0000 | |||
2407 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-05 07:18:23 +0000 | |||
2408 | @@ -30,7 +30,8 @@ | |||
2409 | 30 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up | 30 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
2410 | 31 | call(['sgdisk', '--zap-all', '--mbrtogpt', | 31 | call(['sgdisk', '--zap-all', '--mbrtogpt', |
2411 | 32 | '--clear', block_device]) | 32 | '--clear', block_device]) |
2413 | 33 | dev_end = check_output(['blockdev', '--getsz', block_device]) | 33 | dev_end = check_output(['blockdev', '--getsz', |
2414 | 34 | block_device]).decode('UTF-8') | ||
2415 | 34 | gpt_end = int(dev_end.split()[0]) - 100 | 35 | gpt_end = int(dev_end.split()[0]) - 100 |
2416 | 35 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), | 36 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
2417 | 36 | 'bs=1M', 'count=1']) | 37 | 'bs=1M', 'count=1']) |
2418 | @@ -47,7 +48,7 @@ | |||
2419 | 47 | it doesn't. | 48 | it doesn't. |
2420 | 48 | ''' | 49 | ''' |
2421 | 49 | is_partition = bool(re.search(r".*[0-9]+\b", device)) | 50 | is_partition = bool(re.search(r".*[0-9]+\b", device)) |
2423 | 50 | out = check_output(['mount']) | 51 | out = check_output(['mount']).decode('UTF-8') |
2424 | 51 | if is_partition: | 52 | if is_partition: |
2425 | 52 | return bool(re.search(device + r"\b", out)) | 53 | return bool(re.search(device + r"\b", out)) |
2426 | 53 | return bool(re.search(device + r"[0-9]+\b", out)) | 54 | return bool(re.search(device + r"[0-9]+\b", out)) |
2427 | 54 | 55 | ||
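The `is_device_mounted` hunk decodes `mount` output before running regexes over it. The same logic can be sketched as a pure function over captured output (the parameterization is an assumption; charmhelpers runs `mount` itself):

```python
import re


def device_in_mount_output(device, mount_output):
    """Re-statement of the patched is_device_mounted() check.

    mount_output is raw `mount` output (bytes on Python 3, hence the
    decode). Whole disks match any numbered partition; partitions
    match exactly.
    """
    if isinstance(mount_output, bytes):
        mount_output = mount_output.decode('UTF-8')
    is_partition = bool(re.search(r".*[0-9]+\b", device))
    if is_partition:
        return bool(re.search(device + r"\b", mount_output))
    return bool(re.search(device + r"[0-9]+\b", mount_output))
```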
2428 | === modified file 'hooks/charmhelpers/core/fstab.py' | |||
2429 | --- hooks/charmhelpers/core/fstab.py 2014-06-24 13:40:39 +0000 | |||
2430 | +++ hooks/charmhelpers/core/fstab.py 2014-12-05 07:18:23 +0000 | |||
2431 | @@ -3,10 +3,11 @@ | |||
2432 | 3 | 3 | ||
2433 | 4 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' | 4 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' |
2434 | 5 | 5 | ||
2435 | 6 | import io | ||
2436 | 6 | import os | 7 | import os |
2437 | 7 | 8 | ||
2438 | 8 | 9 | ||
2440 | 9 | class Fstab(file): | 10 | class Fstab(io.FileIO): |
2441 | 10 | """This class extends file in order to implement a file reader/writer | 11 | """This class extends file in order to implement a file reader/writer |
2442 | 11 | for file `/etc/fstab` | 12 | for file `/etc/fstab` |
2443 | 12 | """ | 13 | """ |
2444 | @@ -24,8 +25,8 @@ | |||
2445 | 24 | options = "defaults" | 25 | options = "defaults" |
2446 | 25 | 26 | ||
2447 | 26 | self.options = options | 27 | self.options = options |
2450 | 27 | self.d = d | 28 | self.d = int(d) |
2451 | 28 | self.p = p | 29 | self.p = int(p) |
2452 | 29 | 30 | ||
2453 | 30 | def __eq__(self, o): | 31 | def __eq__(self, o): |
2454 | 31 | return str(self) == str(o) | 32 | return str(self) == str(o) |
2455 | @@ -45,7 +46,7 @@ | |||
2456 | 45 | self._path = path | 46 | self._path = path |
2457 | 46 | else: | 47 | else: |
2458 | 47 | self._path = self.DEFAULT_PATH | 48 | self._path = self.DEFAULT_PATH |
2460 | 48 | file.__init__(self, self._path, 'r+') | 49 | super(Fstab, self).__init__(self._path, 'rb+') |
2461 | 49 | 50 | ||
2462 | 50 | def _hydrate_entry(self, line): | 51 | def _hydrate_entry(self, line): |
2463 | 51 | # NOTE: use split with no arguments to split on any | 52 | # NOTE: use split with no arguments to split on any |
2464 | @@ -58,8 +59,9 @@ | |||
2465 | 58 | def entries(self): | 59 | def entries(self): |
2466 | 59 | self.seek(0) | 60 | self.seek(0) |
2467 | 60 | for line in self.readlines(): | 61 | for line in self.readlines(): |
2468 | 62 | line = line.decode('us-ascii') | ||
2469 | 61 | try: | 63 | try: |
2471 | 62 | if not line.startswith("#"): | 64 | if line.strip() and not line.startswith("#"): |
2472 | 63 | yield self._hydrate_entry(line) | 65 | yield self._hydrate_entry(line) |
2473 | 64 | except ValueError: | 66 | except ValueError: |
2474 | 65 | pass | 67 | pass |
2475 | @@ -75,14 +77,14 @@ | |||
2476 | 75 | if self.get_entry_by_attr('device', entry.device): | 77 | if self.get_entry_by_attr('device', entry.device): |
2477 | 76 | return False | 78 | return False |
2478 | 77 | 79 | ||
2480 | 78 | self.write(str(entry) + '\n') | 80 | self.write((str(entry) + '\n').encode('us-ascii')) |
2481 | 79 | self.truncate() | 81 | self.truncate() |
2482 | 80 | return entry | 82 | return entry |
2483 | 81 | 83 | ||
2484 | 82 | def remove_entry(self, entry): | 84 | def remove_entry(self, entry): |
2485 | 83 | self.seek(0) | 85 | self.seek(0) |
2486 | 84 | 86 | ||
2488 | 85 | lines = self.readlines() | 87 | lines = [l.decode('us-ascii') for l in self.readlines()] |
2489 | 86 | 88 | ||
2490 | 87 | found = False | 89 | found = False |
2491 | 88 | for index, line in enumerate(lines): | 90 | for index, line in enumerate(lines): |
2492 | @@ -97,7 +99,7 @@ | |||
2493 | 97 | lines.remove(line) | 99 | lines.remove(line) |
2494 | 98 | 100 | ||
2495 | 99 | self.seek(0) | 101 | self.seek(0) |
2497 | 100 | self.write(''.join(lines)) | 102 | self.write(''.join(lines).encode('us-ascii')) |
2498 | 101 | self.truncate() | 103 | self.truncate() |
2499 | 102 | return True | 104 | return True |
2500 | 103 | 105 | ||
2501 | 104 | 106 | ||
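The `Fstab` port switches the class to byte-oriented `io.FileIO`, so every read is decoded and every write encoded as `us-ascii`, and the entry generator now also skips blank lines. A minimal sketch of that read path, operating on raw bytes rather than a file (`parse_fstab` is a hypothetical helper, not the charmhelpers API):

```python
def parse_fstab(data):
    """Parse raw /etc/fstab bytes the way the ported Fstab does:
    decode each line, skip blanks and comments, split on whitespace,
    and coerce the dump/pass fields to int."""
    entries = []
    for line in data.splitlines():
        line = line.decode('us-ascii')
        if line.strip() and not line.startswith('#'):
            device, mountpoint, fstype, options, d, p = line.split()
            entries.append((device, mountpoint, fstype, options,
                            int(d), int(p)))
    return entries
```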
2502 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
2503 | --- hooks/charmhelpers/core/hookenv.py 2014-09-25 15:37:05 +0000 | |||
2504 | +++ hooks/charmhelpers/core/hookenv.py 2014-12-05 07:18:23 +0000 | |||
2505 | @@ -9,9 +9,14 @@ | |||
2506 | 9 | import yaml | 9 | import yaml |
2507 | 10 | import subprocess | 10 | import subprocess |
2508 | 11 | import sys | 11 | import sys |
2509 | 12 | import UserDict | ||
2510 | 13 | from subprocess import CalledProcessError | 12 | from subprocess import CalledProcessError |
2511 | 14 | 13 | ||
2512 | 14 | import six | ||
2513 | 15 | if not six.PY3: | ||
2514 | 16 | from UserDict import UserDict | ||
2515 | 17 | else: | ||
2516 | 18 | from collections import UserDict | ||
2517 | 19 | |||
2518 | 15 | CRITICAL = "CRITICAL" | 20 | CRITICAL = "CRITICAL" |
2519 | 16 | ERROR = "ERROR" | 21 | ERROR = "ERROR" |
2520 | 17 | WARNING = "WARNING" | 22 | WARNING = "WARNING" |
2521 | @@ -63,16 +68,18 @@ | |||
2522 | 63 | command = ['juju-log'] | 68 | command = ['juju-log'] |
2523 | 64 | if level: | 69 | if level: |
2524 | 65 | command += ['-l', level] | 70 | command += ['-l', level] |
2525 | 71 | if not isinstance(message, six.string_types): | ||
2526 | 72 | message = repr(message) | ||
2527 | 66 | command += [message] | 73 | command += [message] |
2528 | 67 | subprocess.call(command) | 74 | subprocess.call(command) |
2529 | 68 | 75 | ||
2530 | 69 | 76 | ||
2532 | 70 | class Serializable(UserDict.IterableUserDict): | 77 | class Serializable(UserDict): |
2533 | 71 | """Wrapper, an object that can be serialized to yaml or json""" | 78 | """Wrapper, an object that can be serialized to yaml or json""" |
2534 | 72 | 79 | ||
2535 | 73 | def __init__(self, obj): | 80 | def __init__(self, obj): |
2536 | 74 | # wrap the object | 81 | # wrap the object |
2538 | 75 | UserDict.IterableUserDict.__init__(self) | 82 | UserDict.__init__(self) |
2539 | 76 | self.data = obj | 83 | self.data = obj |
2540 | 77 | 84 | ||
2541 | 78 | def __getattr__(self, attr): | 85 | def __getattr__(self, attr): |
2542 | @@ -214,6 +221,12 @@ | |||
2543 | 214 | except KeyError: | 221 | except KeyError: |
2544 | 215 | return (self._prev_dict or {})[key] | 222 | return (self._prev_dict or {})[key] |
2545 | 216 | 223 | ||
2546 | 224 | def keys(self): | ||
2547 | 225 | prev_keys = [] | ||
2548 | 226 | if self._prev_dict is not None: | ||
2549 | 227 | prev_keys = self._prev_dict.keys() | ||
2550 | 228 | return list(set(prev_keys + list(dict.keys(self)))) | ||
2551 | 229 | |||
2552 | 217 | def load_previous(self, path=None): | 230 | def load_previous(self, path=None): |
2553 | 218 | """Load previous copy of config from disk. | 231 | """Load previous copy of config from disk. |
2554 | 219 | 232 | ||
2555 | @@ -263,7 +276,7 @@ | |||
2556 | 263 | 276 | ||
2557 | 264 | """ | 277 | """ |
2558 | 265 | if self._prev_dict: | 278 | if self._prev_dict: |
2560 | 266 | for k, v in self._prev_dict.iteritems(): | 279 | for k, v in six.iteritems(self._prev_dict): |
2561 | 267 | if k not in self: | 280 | if k not in self: |
2562 | 268 | self[k] = v | 281 | self[k] = v |
2563 | 269 | with open(self.path, 'w') as f: | 282 | with open(self.path, 'w') as f: |
2564 | @@ -278,7 +291,8 @@ | |||
2565 | 278 | config_cmd_line.append(scope) | 291 | config_cmd_line.append(scope) |
2566 | 279 | config_cmd_line.append('--format=json') | 292 | config_cmd_line.append('--format=json') |
2567 | 280 | try: | 293 | try: |
2569 | 281 | config_data = json.loads(subprocess.check_output(config_cmd_line)) | 294 | config_data = json.loads( |
2570 | 295 | subprocess.check_output(config_cmd_line).decode('UTF-8')) | ||
2571 | 282 | if scope is not None: | 296 | if scope is not None: |
2572 | 283 | return config_data | 297 | return config_data |
2573 | 284 | return Config(config_data) | 298 | return Config(config_data) |
2574 | @@ -297,10 +311,10 @@ | |||
2575 | 297 | if unit: | 311 | if unit: |
2576 | 298 | _args.append(unit) | 312 | _args.append(unit) |
2577 | 299 | try: | 313 | try: |
2579 | 300 | return json.loads(subprocess.check_output(_args)) | 314 | return json.loads(subprocess.check_output(_args).decode('UTF-8')) |
2580 | 301 | except ValueError: | 315 | except ValueError: |
2581 | 302 | return None | 316 | return None |
2583 | 303 | except CalledProcessError, e: | 317 | except CalledProcessError as e: |
2584 | 304 | if e.returncode == 2: | 318 | if e.returncode == 2: |
2585 | 305 | return None | 319 | return None |
2586 | 306 | raise | 320 | raise |
2587 | @@ -312,7 +326,7 @@ | |||
2588 | 312 | relation_cmd_line = ['relation-set'] | 326 | relation_cmd_line = ['relation-set'] |
2589 | 313 | if relation_id is not None: | 327 | if relation_id is not None: |
2590 | 314 | relation_cmd_line.extend(('-r', relation_id)) | 328 | relation_cmd_line.extend(('-r', relation_id)) |
2592 | 315 | for k, v in (relation_settings.items() + kwargs.items()): | 329 | for k, v in (list(relation_settings.items()) + list(kwargs.items())): |
2593 | 316 | if v is None: | 330 | if v is None: |
2594 | 317 | relation_cmd_line.append('{}='.format(k)) | 331 | relation_cmd_line.append('{}='.format(k)) |
2595 | 318 | else: | 332 | else: |
2596 | @@ -329,7 +343,8 @@ | |||
2597 | 329 | relid_cmd_line = ['relation-ids', '--format=json'] | 343 | relid_cmd_line = ['relation-ids', '--format=json'] |
2598 | 330 | if reltype is not None: | 344 | if reltype is not None: |
2599 | 331 | relid_cmd_line.append(reltype) | 345 | relid_cmd_line.append(reltype) |
2601 | 332 | return json.loads(subprocess.check_output(relid_cmd_line)) or [] | 346 | return json.loads( |
2602 | 347 | subprocess.check_output(relid_cmd_line).decode('UTF-8')) or [] | ||
2603 | 333 | return [] | 348 | return [] |
2604 | 334 | 349 | ||
2605 | 335 | 350 | ||
2606 | @@ -340,7 +355,8 @@ | |||
2607 | 340 | units_cmd_line = ['relation-list', '--format=json'] | 355 | units_cmd_line = ['relation-list', '--format=json'] |
2608 | 341 | if relid is not None: | 356 | if relid is not None: |
2609 | 342 | units_cmd_line.extend(('-r', relid)) | 357 | units_cmd_line.extend(('-r', relid)) |
2611 | 343 | return json.loads(subprocess.check_output(units_cmd_line)) or [] | 358 | return json.loads( |
2612 | 359 | subprocess.check_output(units_cmd_line).decode('UTF-8')) or [] | ||
2613 | 344 | 360 | ||
2614 | 345 | 361 | ||
2615 | 346 | @cached | 362 | @cached |
2616 | @@ -449,7 +465,7 @@ | |||
2617 | 449 | """Get the unit ID for the remote unit""" | 465 | """Get the unit ID for the remote unit""" |
2618 | 450 | _args = ['unit-get', '--format=json', attribute] | 466 | _args = ['unit-get', '--format=json', attribute] |
2619 | 451 | try: | 467 | try: |
2621 | 452 | return json.loads(subprocess.check_output(_args)) | 468 | return json.loads(subprocess.check_output(_args).decode('UTF-8')) |
2622 | 453 | except ValueError: | 469 | except ValueError: |
2623 | 454 | return None | 470 | return None |
2624 | 455 | 471 | ||
2625 | 456 | 472 | ||
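Most `hookenv.py` hunks apply one pattern: decode the bytes from `subprocess.check_output` before handing them to `json.loads`. A self-contained sketch of that step (the name `parse_json_output` is illustrative; the real helpers also map empty results to `None`/`[]`):

```python
import json


def parse_json_output(raw):
    """Decode captured command output and parse it as JSON, the
    recurring pattern in the patched relation_get/relation_ids/
    unit_get. Returns None for empty, invalid, or falsy results."""
    try:
        return json.loads(raw.decode('UTF-8')) or None
    except ValueError:
        # json.JSONDecodeError subclasses ValueError, so both
        # Python 2 and 3 land here on malformed output.
        return None
```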
2626 | === modified file 'hooks/charmhelpers/core/host.py' | |||
2627 | --- hooks/charmhelpers/core/host.py 2014-10-16 17:42:14 +0000 | |||
2628 | +++ hooks/charmhelpers/core/host.py 2014-12-05 07:18:23 +0000 | |||
2629 | @@ -13,13 +13,13 @@ | |||
2630 | 13 | import string | 13 | import string |
2631 | 14 | import subprocess | 14 | import subprocess |
2632 | 15 | import hashlib | 15 | import hashlib |
2633 | 16 | import shutil | ||
2634 | 17 | from contextlib import contextmanager | 16 | from contextlib import contextmanager |
2635 | 18 | |||
2636 | 19 | from collections import OrderedDict | 17 | from collections import OrderedDict |
2637 | 20 | 18 | ||
2640 | 21 | from hookenv import log | 19 | import six |
2641 | 22 | from fstab import Fstab | 20 | |
2642 | 21 | from .hookenv import log | ||
2643 | 22 | from .fstab import Fstab | ||
2644 | 23 | 23 | ||
2645 | 24 | 24 | ||
2646 | 25 | def service_start(service_name): | 25 | def service_start(service_name): |
2647 | @@ -55,7 +55,9 @@ | |||
2648 | 55 | def service_running(service): | 55 | def service_running(service): |
2649 | 56 | """Determine whether a system service is running""" | 56 | """Determine whether a system service is running""" |
2650 | 57 | try: | 57 | try: |
2652 | 58 | output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT) | 58 | output = subprocess.check_output( |
2653 | 59 | ['service', service, 'status'], | ||
2654 | 60 | stderr=subprocess.STDOUT).decode('UTF-8') | ||
2655 | 59 | except subprocess.CalledProcessError: | 61 | except subprocess.CalledProcessError: |
2656 | 60 | return False | 62 | return False |
2657 | 61 | else: | 63 | else: |
2658 | @@ -68,7 +70,9 @@ | |||
2659 | 68 | def service_available(service_name): | 70 | def service_available(service_name): |
2660 | 69 | """Determine whether a system service is available""" | 71 | """Determine whether a system service is available""" |
2661 | 70 | try: | 72 | try: |
2663 | 71 | subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT) | 73 | subprocess.check_output( |
2664 | 74 | ['service', service_name, 'status'], | ||
2665 | 75 | stderr=subprocess.STDOUT).decode('UTF-8') | ||
2666 | 72 | except subprocess.CalledProcessError as e: | 76 | except subprocess.CalledProcessError as e: |
2667 | 73 | return 'unrecognized service' not in e.output | 77 | return 'unrecognized service' not in e.output |
2668 | 74 | else: | 78 | else: |
2669 | @@ -97,6 +101,26 @@ | |||
2670 | 97 | return user_info | 101 | return user_info |
2671 | 98 | 102 | ||
2672 | 99 | 103 | ||
2673 | 104 | def add_group(group_name, system_group=False): | ||
2674 | 105 | """Add a group to the system""" | ||
2675 | 106 | try: | ||
2676 | 107 | group_info = grp.getgrnam(group_name) | ||
2677 | 108 | log('group {0} already exists!'.format(group_name)) | ||
2678 | 109 | except KeyError: | ||
2679 | 110 | log('creating group {0}'.format(group_name)) | ||
2680 | 111 | cmd = ['addgroup'] | ||
2681 | 112 | if system_group: | ||
2682 | 113 | cmd.append('--system') | ||
2683 | 114 | else: | ||
2684 | 115 | cmd.extend([ | ||
2685 | 116 | '--group', | ||
2686 | 117 | ]) | ||
2687 | 118 | cmd.append(group_name) | ||
2688 | 119 | subprocess.check_call(cmd) | ||
2689 | 120 | group_info = grp.getgrnam(group_name) | ||
2690 | 121 | return group_info | ||
2691 | 122 | |||
2692 | 123 | |||
2693 | 100 | def add_user_to_group(username, group): | 124 | def add_user_to_group(username, group): |
2694 | 101 | """Add a user to a group""" | 125 | """Add a user to a group""" |
2695 | 102 | cmd = [ | 126 | cmd = [ |
2696 | @@ -116,7 +140,7 @@ | |||
2697 | 116 | cmd.append(from_path) | 140 | cmd.append(from_path) |
2698 | 117 | cmd.append(to_path) | 141 | cmd.append(to_path) |
2699 | 118 | log(" ".join(cmd)) | 142 | log(" ".join(cmd)) |
2701 | 119 | return subprocess.check_output(cmd).strip() | 143 | return subprocess.check_output(cmd).decode('UTF-8').strip() |
2702 | 120 | 144 | ||
2703 | 121 | 145 | ||
2704 | 122 | def symlink(source, destination): | 146 | def symlink(source, destination): |
2705 | @@ -131,7 +155,7 @@ | |||
2706 | 131 | subprocess.check_call(cmd) | 155 | subprocess.check_call(cmd) |
2707 | 132 | 156 | ||
2708 | 133 | 157 | ||
2710 | 134 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | 158 | def mkdir(path, owner='root', group='root', perms=0o555, force=False): |
2711 | 135 | """Create a directory""" | 159 | """Create a directory""" |
2712 | 136 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | 160 | log("Making dir {} {}:{} {:o}".format(path, owner, group, |
2713 | 137 | perms)) | 161 | perms)) |
2714 | @@ -147,7 +171,7 @@ | |||
2715 | 147 | os.chown(realpath, uid, gid) | 171 | os.chown(realpath, uid, gid) |
2716 | 148 | 172 | ||
2717 | 149 | 173 | ||
2719 | 150 | def write_file(path, content, owner='root', group='root', perms=0444): | 174 | def write_file(path, content, owner='root', group='root', perms=0o444): |
2720 | 151 | """Create or overwrite a file with the contents of a string""" | 175 | """Create or overwrite a file with the contents of a string""" |
2721 | 152 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) | 176 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
2722 | 153 | uid = pwd.getpwnam(owner).pw_uid | 177 | uid = pwd.getpwnam(owner).pw_uid |
2723 | @@ -178,7 +202,7 @@ | |||
2724 | 178 | cmd_args.extend([device, mountpoint]) | 202 | cmd_args.extend([device, mountpoint]) |
2725 | 179 | try: | 203 | try: |
2726 | 180 | subprocess.check_output(cmd_args) | 204 | subprocess.check_output(cmd_args) |
2728 | 181 | except subprocess.CalledProcessError, e: | 205 | except subprocess.CalledProcessError as e: |
2729 | 182 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | 206 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
2730 | 183 | return False | 207 | return False |
2731 | 184 | 208 | ||
2732 | @@ -192,7 +216,7 @@ | |||
2733 | 192 | cmd_args = ['umount', mountpoint] | 216 | cmd_args = ['umount', mountpoint] |
2734 | 193 | try: | 217 | try: |
2735 | 194 | subprocess.check_output(cmd_args) | 218 | subprocess.check_output(cmd_args) |
2737 | 195 | except subprocess.CalledProcessError, e: | 219 | except subprocess.CalledProcessError as e: |
2738 | 196 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | 220 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
2739 | 197 | return False | 221 | return False |
2740 | 198 | 222 | ||
2741 | @@ -219,8 +243,8 @@ | |||
2742 | 219 | """ | 243 | """ |
2743 | 220 | if os.path.exists(path): | 244 | if os.path.exists(path): |
2744 | 221 | h = getattr(hashlib, hash_type)() | 245 | h = getattr(hashlib, hash_type)() |
2747 | 222 | with open(path, 'r') as source: | 246 | with open(path, 'rb') as source: |
2748 | 223 | h.update(source.read()) # IGNORE:E1101 - it does have update | 247 | h.update(source.read()) |
2749 | 224 | return h.hexdigest() | 248 | return h.hexdigest() |
2750 | 225 | else: | 249 | else: |
2751 | 226 | return None | 250 | return None |
2752 | @@ -298,7 +322,7 @@ | |||
2753 | 298 | if length is None: | 322 | if length is None: |
2754 | 299 | length = random.choice(range(35, 45)) | 323 | length = random.choice(range(35, 45)) |
2755 | 300 | alphanumeric_chars = [ | 324 | alphanumeric_chars = [ |
2757 | 301 | l for l in (string.letters + string.digits) | 325 | l for l in (string.ascii_letters + string.digits) |
2758 | 302 | if l not in 'l0QD1vAEIOUaeiou'] | 326 | if l not in 'l0QD1vAEIOUaeiou'] |
2759 | 303 | random_chars = [ | 327 | random_chars = [ |
2760 | 304 | random.choice(alphanumeric_chars) for _ in range(length)] | 328 | random.choice(alphanumeric_chars) for _ in range(length)] |
2761 | @@ -307,14 +331,14 @@ | |||
2762 | 307 | 331 | ||
2763 | 308 | def list_nics(nic_type): | 332 | def list_nics(nic_type): |
2764 | 309 | '''Return a list of nics of given type(s)''' | 333 | '''Return a list of nics of given type(s)''' |
2766 | 310 | if isinstance(nic_type, basestring): | 334 | if isinstance(nic_type, six.string_types): |
2767 | 311 | int_types = [nic_type] | 335 | int_types = [nic_type] |
2768 | 312 | else: | 336 | else: |
2769 | 313 | int_types = nic_type | 337 | int_types = nic_type |
2770 | 314 | interfaces = [] | 338 | interfaces = [] |
2771 | 315 | for int_type in int_types: | 339 | for int_type in int_types: |
2772 | 316 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] | 340 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
2774 | 317 | ip_output = subprocess.check_output(cmd).split('\n') | 341 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
2775 | 318 | ip_output = (line for line in ip_output if line) | 342 | ip_output = (line for line in ip_output if line) |
2776 | 319 | for line in ip_output: | 343 | for line in ip_output: |
2777 | 320 | if line.split()[1].startswith(int_type): | 344 | if line.split()[1].startswith(int_type): |
2778 | @@ -328,15 +352,44 @@ | |||
2779 | 328 | return interfaces | 352 | return interfaces |
2780 | 329 | 353 | ||
2781 | 330 | 354 | ||
2783 | 331 | def set_nic_mtu(nic, mtu): | 355 | def set_nic_mtu(nic, mtu, persistence=False): |
2784 | 332 | '''Set MTU on a network interface''' | 356 | '''Set MTU on a network interface''' |
2785 | 333 | cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] | 357 | cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
2786 | 334 | subprocess.check_call(cmd) | 358 | subprocess.check_call(cmd) |
2787 | 359 | # persistence mtu configuration | ||
2788 | 360 | if not persistence: | ||
2789 | 361 | return | ||
2790 | 362 | if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic): | ||
2791 | 363 | nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic | ||
2792 | 364 | else: | ||
2793 | 365 | nic_cfg_file = "/etc/network/interfaces" | ||
2794 | 366 | f = open(nic_cfg_file,"r") | ||
2795 | 367 | lines = f.readlines() | ||
2796 | 368 | found = False | ||
2797 | 369 | length = len(lines) | ||
2798 | 370 | for i in range(len(lines)): | ||
2799 | 371 | lines[i] = lines[i].replace('\n', '') | ||
2800 | 372 | if lines[i].startswith("iface %s" % nic): | ||
2801 | 373 | found = True | ||
2802 | 374 | lines.insert(i+1, " up ip link set $IFACE mtu %s" % mtu) | ||
2803 | 375 | lines.insert(i+2, " down ip link set $IFACE mtu 1500") | ||
2804 | 376 | if length>i+2 and lines[i+3].startswith(" up ip link set $IFACE mtu"): | ||
2805 | 377 | del lines[i+3] | ||
2806 | 378 | if length>i+2 and lines[i+3].startswith(" down ip link set $IFACE mtu"): | ||
2807 | 379 | del lines[i+3] | ||
2808 | 380 | break | ||
2809 | 381 | if not found: | ||
2810 | 382 | lines.insert(length+1, "") | ||
2811 | 383 | lines.insert(length+2, "auto %s" % nic) | ||
2812 | 384 | lines.insert(length+3, "iface %s inet dhcp" % nic) | ||
2813 | 385 | lines.insert(length+4, " up ip link set $IFACE mtu %s" % mtu) | ||
2814 | 386 | lines.insert(length+5, " down ip link set $IFACE mtu 1500") | ||
2815 | 387 | write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644) | ||
2816 | 335 | 388 | ||
2817 | 336 | 389 | ||
2818 | 337 | def get_nic_mtu(nic): | 390 | def get_nic_mtu(nic): |
2819 | 338 | cmd = ['ip', 'addr', 'show', nic] | 391 | cmd = ['ip', 'addr', 'show', nic] |
2821 | 339 | ip_output = subprocess.check_output(cmd).split('\n') | 392 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
2822 | 340 | mtu = "" | 393 | mtu = "" |
2823 | 341 | for line in ip_output: | 394 | for line in ip_output: |
2824 | 342 | words = line.split() | 395 | words = line.split() |
2825 | @@ -347,7 +400,7 @@ | |||
2826 | 347 | 400 | ||
2827 | 348 | def get_nic_hwaddr(nic): | 401 | def get_nic_hwaddr(nic): |
2828 | 349 | cmd = ['ip', '-o', '-0', 'addr', 'show', nic] | 402 | cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
2830 | 350 | ip_output = subprocess.check_output(cmd) | 403 | ip_output = subprocess.check_output(cmd).decode('UTF-8') |
2831 | 351 | hwaddr = "" | 404 | hwaddr = "" |
2832 | 352 | words = ip_output.split() | 405 | words = ip_output.split() |
2833 | 353 | if 'link/ether' in words: | 406 | if 'link/ether' in words: |
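Both hunks above append `.decode('UTF-8')` to `check_output` calls: on Python 3 `check_output` returns `bytes`, and the string parsing that follows would fail without the decode. A quick demonstration using a trivial child process in place of `ip`:

```python
import subprocess
import sys

# stand-in for 'ip -o -0 addr show eth0'; prints a link/ether line
raw = subprocess.check_output(
    [sys.executable, '-c', "print('link/ether aa:bb:cc:dd:ee:ff brd')"])
text = raw.decode('UTF-8')   # bytes -> str, as in the hunks above
words = text.split()
hwaddr = words[words.index('link/ether') + 1]
```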
2834 | @@ -364,8 +417,8 @@ | |||
2835 | 364 | 417 | ||
2836 | 365 | ''' | 418 | ''' |
2837 | 366 | import apt_pkg | 419 | import apt_pkg |
2838 | 367 | from charmhelpers.fetch import apt_cache | ||
2839 | 368 | if not pkgcache: | 420 | if not pkgcache: |
2840 | 421 | from charmhelpers.fetch import apt_cache | ||
2841 | 369 | pkgcache = apt_cache() | 422 | pkgcache = apt_cache() |
2842 | 370 | pkg = pkgcache[package] | 423 | pkg = pkgcache[package] |
2843 | 371 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) | 424 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
2844 | 372 | 425 | ||
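`cmp_pkgrevno` returns whatever `apt_pkg.version_compare` yields: negative, zero, or positive, like the old `cmp()`. A simplified stand-in for dotted numeric versions only (real Debian comparison also handles epochs, tildes, and non-numeric parts), to illustrate the sign convention callers rely on:

```python
def version_compare(a, b):
    # numeric-only approximation of apt_pkg.version_compare's contract:
    # <0 if a is older than b, 0 if equal, >0 if a is newer than b
    pa = [int(x) for x in a.split('.')]
    pb = [int(x) for x in b.split('.')]
    return (pa > pb) - (pa < pb)
```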
2845 | === modified file 'hooks/charmhelpers/core/services/__init__.py' | |||
2846 | --- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:47 +0000 | |||
2847 | +++ hooks/charmhelpers/core/services/__init__.py 2014-12-05 07:18:23 +0000 | |||
2848 | @@ -1,2 +1,2 @@ | |||
2851 | 1 | from .base import * | 1 | from .base import * # NOQA |
2852 | 2 | from .helpers import * | 2 | from .helpers import * # NOQA |
2853 | 3 | 3 | ||
2854 | === modified file 'hooks/charmhelpers/core/services/helpers.py' | |||
2855 | --- hooks/charmhelpers/core/services/helpers.py 2014-09-25 15:37:05 +0000 | |||
2856 | +++ hooks/charmhelpers/core/services/helpers.py 2014-12-05 07:18:23 +0000 | |||
2857 | @@ -196,7 +196,7 @@ | |||
2858 | 196 | if not os.path.isabs(file_name): | 196 | if not os.path.isabs(file_name): |
2859 | 197 | file_name = os.path.join(hookenv.charm_dir(), file_name) | 197 | file_name = os.path.join(hookenv.charm_dir(), file_name) |
2860 | 198 | with open(file_name, 'w') as file_stream: | 198 | with open(file_name, 'w') as file_stream: |
2862 | 199 | os.fchmod(file_stream.fileno(), 0600) | 199 | os.fchmod(file_stream.fileno(), 0o600) |
2863 | 200 | yaml.dump(config_data, file_stream) | 200 | yaml.dump(config_data, file_stream) |
2864 | 201 | 201 | ||
2865 | 202 | def read_context(self, file_name): | 202 | def read_context(self, file_name): |
2866 | @@ -211,15 +211,19 @@ | |||
2867 | 211 | 211 | ||
2868 | 212 | class TemplateCallback(ManagerCallback): | 212 | class TemplateCallback(ManagerCallback): |
2869 | 213 | """ | 213 | """ |
2873 | 214 | Callback class that will render a Jinja2 template, for use as a ready action. | 214 | Callback class that will render a Jinja2 template, for use as a ready |
2874 | 215 | 215 | action. | |
2875 | 216 | :param str source: The template source file, relative to `$CHARM_DIR/templates` | 216 | |
2876 | 217 | :param str source: The template source file, relative to | ||
2877 | 218 | `$CHARM_DIR/templates` | ||
2878 | 219 | |||
2879 | 217 | :param str target: The target to write the rendered template to | 220 | :param str target: The target to write the rendered template to |
2880 | 218 | :param str owner: The owner of the rendered file | 221 | :param str owner: The owner of the rendered file |
2881 | 219 | :param str group: The group of the rendered file | 222 | :param str group: The group of the rendered file |
2882 | 220 | :param int perms: The permissions of the rendered file | 223 | :param int perms: The permissions of the rendered file |
2883 | 221 | """ | 224 | """ |
2885 | 222 | def __init__(self, source, target, owner='root', group='root', perms=0444): | 225 | def __init__(self, source, target, |
2886 | 226 | owner='root', group='root', perms=0o444): | ||
2887 | 223 | self.source = source | 227 | self.source = source |
2888 | 224 | self.target = target | 228 | self.target = target |
2889 | 225 | self.owner = owner | 229 | self.owner = owner |
2890 | 226 | 230 | ||
2891 | === modified file 'hooks/charmhelpers/core/templating.py' | |||
2892 | --- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:47 +0000 | |||
2893 | +++ hooks/charmhelpers/core/templating.py 2014-12-05 07:18:23 +0000 | |||
2894 | @@ -4,7 +4,8 @@ | |||
2895 | 4 | from charmhelpers.core import hookenv | 4 | from charmhelpers.core import hookenv |
2896 | 5 | 5 | ||
2897 | 6 | 6 | ||
2899 | 7 | def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): | 7 | def render(source, target, context, owner='root', group='root', |
2900 | 8 | perms=0o444, templates_dir=None): | ||
2901 | 8 | """ | 9 | """ |
2902 | 9 | Render a template. | 10 | Render a template. |
2903 | 10 | 11 | ||
2904 | 11 | 12 | ||
2905 | === modified file 'hooks/charmhelpers/fetch/__init__.py' | |||
2906 | --- hooks/charmhelpers/fetch/__init__.py 2014-09-25 15:37:05 +0000 | |||
2907 | +++ hooks/charmhelpers/fetch/__init__.py 2014-12-05 07:18:23 +0000 | |||
2908 | @@ -5,10 +5,6 @@ | |||
2909 | 5 | from charmhelpers.core.host import ( | 5 | from charmhelpers.core.host import ( |
2910 | 6 | lsb_release | 6 | lsb_release |
2911 | 7 | ) | 7 | ) |
2912 | 8 | from urlparse import ( | ||
2913 | 9 | urlparse, | ||
2914 | 10 | urlunparse, | ||
2915 | 11 | ) | ||
2916 | 12 | import subprocess | 8 | import subprocess |
2917 | 13 | from charmhelpers.core.hookenv import ( | 9 | from charmhelpers.core.hookenv import ( |
2918 | 14 | config, | 10 | config, |
2919 | @@ -16,6 +12,12 @@ | |||
2920 | 16 | ) | 12 | ) |
2921 | 17 | import os | 13 | import os |
2922 | 18 | 14 | ||
2923 | 15 | import six | ||
2924 | 16 | if six.PY3: | ||
2925 | 17 | from urllib.parse import urlparse, urlunparse | ||
2926 | 18 | else: | ||
2927 | 19 | from urlparse import urlparse, urlunparse | ||
2928 | 20 | |||
2929 | 19 | 21 | ||
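The conditional import added above is the recurring Python 2/3 shim in this branch: `urlparse` moved into `urllib.parse` in Python 3. The same check can be written against `sys.version_info` where a `six` dependency is unwanted:

```python
import sys

if sys.version_info[0] >= 3:             # equivalent to six.PY3
    from urllib.parse import urlparse, urlunparse
else:
    from urlparse import urlparse, urlunparse

parts = urlparse('http://ubuntu-cloud.archive.canonical.com/ubuntu')
rebuilt = urlunparse(parts)
```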
2930 | 20 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | 22 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
2931 | 21 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | 23 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
2932 | @@ -72,6 +74,7 @@ | |||
2933 | 72 | FETCH_HANDLERS = ( | 74 | FETCH_HANDLERS = ( |
2934 | 73 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | 75 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
2935 | 74 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | 76 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
2936 | 77 | 'charmhelpers.fetch.giturl.GitUrlFetchHandler', | ||
2937 | 75 | ) | 78 | ) |
2938 | 76 | 79 | ||
2939 | 77 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. | 80 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. |
2940 | @@ -148,7 +151,7 @@ | |||
2941 | 148 | cmd = ['apt-get', '--assume-yes'] | 151 | cmd = ['apt-get', '--assume-yes'] |
2942 | 149 | cmd.extend(options) | 152 | cmd.extend(options) |
2943 | 150 | cmd.append('install') | 153 | cmd.append('install') |
2945 | 151 | if isinstance(packages, basestring): | 154 | if isinstance(packages, six.string_types): |
2946 | 152 | cmd.append(packages) | 155 | cmd.append(packages) |
2947 | 153 | else: | 156 | else: |
2948 | 154 | cmd.extend(packages) | 157 | cmd.extend(packages) |
2949 | @@ -181,7 +184,7 @@ | |||
2950 | 181 | def apt_purge(packages, fatal=False): | 184 | def apt_purge(packages, fatal=False): |
2951 | 182 | """Purge one or more packages""" | 185 | """Purge one or more packages""" |
2952 | 183 | cmd = ['apt-get', '--assume-yes', 'purge'] | 186 | cmd = ['apt-get', '--assume-yes', 'purge'] |
2954 | 184 | if isinstance(packages, basestring): | 187 | if isinstance(packages, six.string_types): |
2955 | 185 | cmd.append(packages) | 188 | cmd.append(packages) |
2956 | 186 | else: | 189 | else: |
2957 | 187 | cmd.extend(packages) | 190 | cmd.extend(packages) |
2958 | @@ -192,7 +195,7 @@ | |||
2959 | 192 | def apt_hold(packages, fatal=False): | 195 | def apt_hold(packages, fatal=False): |
2960 | 193 | """Hold one or more packages""" | 196 | """Hold one or more packages""" |
2961 | 194 | cmd = ['apt-mark', 'hold'] | 197 | cmd = ['apt-mark', 'hold'] |
2963 | 195 | if isinstance(packages, basestring): | 198 | if isinstance(packages, six.string_types): |
2964 | 196 | cmd.append(packages) | 199 | cmd.append(packages) |
2965 | 197 | else: | 200 | else: |
2966 | 198 | cmd.extend(packages) | 201 | cmd.extend(packages) |
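`apt_install`, `apt_purge`, and `apt_hold` all gained the same change: `basestring` no longer exists on Python 3, so `six.string_types` takes its place in the one-name-or-list normalization. The pattern in isolation, with plain `str` standing in for `six.string_types` (no command is actually executed):

```python
def build_purge_cmd(packages):
    # a single package name is appended; an iterable of names is extended
    cmd = ['apt-get', '--assume-yes', 'purge']
    if isinstance(packages, str):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd
```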
2967 | @@ -218,6 +221,7 @@ | |||
2968 | 218 | pocket for the release. | 221 | pocket for the release. |
2969 | 219 | 'cloud:' may be used to activate official cloud archive pockets, | 222 | 'cloud:' may be used to activate official cloud archive pockets, |
2970 | 220 | such as 'cloud:icehouse' | 223 | such as 'cloud:icehouse' |
2971 | 224 | 'distro' may be used as a noop | ||
2972 | 221 | 225 | ||
2973 | 222 | @param key: A key to be added to the system's APT keyring and used | 226 | @param key: A key to be added to the system's APT keyring and used |
2974 | 223 | to verify the signatures on packages. Ideally, this should be an | 227 | to verify the signatures on packages. Ideally, this should be an |
2975 | @@ -251,12 +255,14 @@ | |||
2976 | 251 | release = lsb_release()['DISTRIB_CODENAME'] | 255 | release = lsb_release()['DISTRIB_CODENAME'] |
2977 | 252 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | 256 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
2978 | 253 | apt.write(PROPOSED_POCKET.format(release)) | 257 | apt.write(PROPOSED_POCKET.format(release)) |
2979 | 258 | elif source == 'distro': | ||
2980 | 259 | pass | ||
2981 | 254 | else: | 260 | else: |
2983 | 255 | raise SourceConfigError("Unknown source: {!r}".format(source)) | 261 | log("Unknown source: {!r}".format(source)) |
2984 | 256 | 262 | ||
2985 | 257 | if key: | 263 | if key: |
2986 | 258 | if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: | 264 | if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: |
2988 | 259 | with NamedTemporaryFile() as key_file: | 265 | with NamedTemporaryFile('w+') as key_file: |
2989 | 260 | key_file.write(key) | 266 | key_file.write(key) |
2990 | 261 | key_file.flush() | 267 | key_file.flush() |
2991 | 262 | key_file.seek(0) | 268 | key_file.seek(0) |
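The `NamedTemporaryFile('w+')` change above matters on Python 3: the default mode is `'w+b'`, so writing a `str` key would raise `TypeError`. The write/flush/seek sequence in text mode:

```python
from tempfile import NamedTemporaryFile

# text mode ('w+') lets the ASCII-armored key be written as str on Python 3
with NamedTemporaryFile('w+') as key_file:
    key_file.write('-----BEGIN PGP PUBLIC KEY BLOCK-----\n(key body)')
    key_file.flush()   # make the contents visible to other readers
    key_file.seek(0)   # rewind so the key can be re-read from the top
    contents = key_file.read()
```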
2992 | @@ -293,14 +299,14 @@ | |||
2993 | 293 | sources = safe_load((config(sources_var) or '').strip()) or [] | 299 | sources = safe_load((config(sources_var) or '').strip()) or [] |
2994 | 294 | keys = safe_load((config(keys_var) or '').strip()) or None | 300 | keys = safe_load((config(keys_var) or '').strip()) or None |
2995 | 295 | 301 | ||
2997 | 296 | if isinstance(sources, basestring): | 302 | if isinstance(sources, six.string_types): |
2998 | 297 | sources = [sources] | 303 | sources = [sources] |
2999 | 298 | 304 | ||
3000 | 299 | if keys is None: | 305 | if keys is None: |
3001 | 300 | for source in sources: | 306 | for source in sources: |
3002 | 301 | add_source(source, None) | 307 | add_source(source, None) |
3003 | 302 | else: | 308 | else: |
3005 | 303 | if isinstance(keys, basestring): | 309 | if isinstance(keys, six.string_types): |
3006 | 304 | keys = [keys] | 310 | keys = [keys] |
3007 | 305 | 311 | ||
3008 | 306 | if len(sources) != len(keys): | 312 | if len(sources) != len(keys): |
3009 | @@ -397,7 +403,7 @@ | |||
3010 | 397 | while result is None or result == APT_NO_LOCK: | 403 | while result is None or result == APT_NO_LOCK: |
3011 | 398 | try: | 404 | try: |
3012 | 399 | result = subprocess.check_call(cmd, env=env) | 405 | result = subprocess.check_call(cmd, env=env) |
3014 | 400 | except subprocess.CalledProcessError, e: | 406 | except subprocess.CalledProcessError as e: |
3015 | 401 | retry_count = retry_count + 1 | 407 | retry_count = retry_count + 1 |
3016 | 402 | if retry_count > APT_NO_LOCK_RETRY_COUNT: | 408 | if retry_count > APT_NO_LOCK_RETRY_COUNT: |
3017 | 403 | raise | 409 | raise |
3018 | 404 | 410 | ||
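The surrounding loop retries while apt exits with `APT_NO_LOCK` (another process holds the dpkg lock) and re-raises once the retry budget is spent. A self-contained sketch of that control flow, with a fake `check_call` simulating a lock held for two attempts; any back-off sleep between retries is omitted:

```python
import subprocess

APT_NO_LOCK = 100              # apt's "couldn't acquire lock" exit code
APT_NO_LOCK_RETRY_COUNT = 30

def run_apt_command(cmd, check_call):
    result = None
    retry_count = 0
    while result is None or result == APT_NO_LOCK:
        try:
            result = check_call(cmd)
        except subprocess.CalledProcessError as e:
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = e.returncode
    return result

class LockedTwice(object):
    """Fails with the lock error twice, then succeeds."""
    def __init__(self):
        self.failures = 2

    def __call__(self, cmd):
        if self.failures:
            self.failures -= 1
            raise subprocess.CalledProcessError(APT_NO_LOCK, cmd)
        return 0

result = run_apt_command(['apt-get', 'update'], LockedTwice())
```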
3019 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
3020 | --- hooks/charmhelpers/fetch/archiveurl.py 2014-09-25 15:37:05 +0000 | |||
3021 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-05 07:18:23 +0000 | |||
3022 | @@ -1,8 +1,23 @@ | |||
3023 | 1 | import os | 1 | import os |
3024 | 2 | import urllib2 | ||
3025 | 3 | from urllib import urlretrieve | ||
3026 | 4 | import urlparse | ||
3027 | 5 | import hashlib | 2 | import hashlib |
3028 | 3 | import re | ||
3029 | 4 | |||
3030 | 5 | import six | ||
3031 | 6 | if six.PY3: | ||
3032 | 7 | from urllib.request import ( | ||
3033 | 8 | build_opener, install_opener, urlopen, urlretrieve, | ||
3034 | 9 | HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, | ||
3035 | 10 | ) | ||
3036 | 11 | from urllib.parse import urlparse, urlunparse, parse_qs | ||
3037 | 12 | from urllib.error import URLError | ||
3038 | 13 | else: | ||
3039 | 14 | from urllib import urlretrieve | ||
3040 | 15 | from urllib2 import ( | ||
3041 | 16 | build_opener, install_opener, urlopen, | ||
3042 | 17 | HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, | ||
3043 | 18 | URLError | ||
3044 | 19 | ) | ||
3045 | 20 | from urlparse import urlparse, urlunparse, parse_qs | ||
3046 | 6 | 21 | ||
3047 | 7 | from charmhelpers.fetch import ( | 22 | from charmhelpers.fetch import ( |
3048 | 8 | BaseFetchHandler, | 23 | BaseFetchHandler, |
3049 | @@ -15,6 +30,24 @@ | |||
3050 | 15 | from charmhelpers.core.host import mkdir, check_hash | 30 | from charmhelpers.core.host import mkdir, check_hash |
3051 | 16 | 31 | ||
3052 | 17 | 32 | ||
3053 | 33 | def splituser(host): | ||
3054 | 34 | '''urllib.splituser(), but six's support of this seems broken''' | ||
3055 | 35 | _userprog = re.compile('^(.*)@(.*)$') | ||
3056 | 36 | match = _userprog.match(host) | ||
3057 | 37 | if match: | ||
3058 | 38 | return match.group(1, 2) | ||
3059 | 39 | return None, host | ||
3060 | 40 | |||
3061 | 41 | |||
3062 | 42 | def splitpasswd(user): | ||
3063 | 43 | '''urllib.splitpasswd(), but six's support of this is missing''' | ||
3064 | 44 | _passwdprog = re.compile('^([^:]*):(.*)$', re.S) | ||
3065 | 45 | match = _passwdprog.match(user) | ||
3066 | 46 | if match: | ||
3067 | 47 | return match.group(1, 2) | ||
3068 | 48 | return user, None | ||
3069 | 49 | |||
3070 | 50 | |||
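The two helpers added above replace `urllib2.splituser`/`splitpasswd`, which have no clean home across Python versions. Restated here with a worked credentials string to show the round trip:

```python
import re

def splituser(host):
    '''Split "user@host" into (user, host); (None, host) when no user.'''
    match = re.match('^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host

def splitpasswd(user):
    '''Split "user:passwd" into (user, passwd); passwd None if absent.'''
    match = re.match('^([^:]*):(.*)$', user, re.S)
    if match:
        return match.group(1, 2)
    return user, None

# hypothetical netloc, as download() would receive from urlparse()
auth, barehost = splituser('me:secret@archive.example.com')
username, password = splitpasswd(auth)
```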
3071 | 18 | class ArchiveUrlFetchHandler(BaseFetchHandler): | 51 | class ArchiveUrlFetchHandler(BaseFetchHandler): |
3072 | 19 | """ | 52 | """ |
3073 | 20 | Handler to download archive files from arbitrary URLs. | 53 | Handler to download archive files from arbitrary URLs. |
3074 | @@ -42,20 +75,20 @@ | |||
3075 | 42 | """ | 75 | """ |
3076 | 43 | # propagate all exceptions | 76 | # propagate all exceptions |
3077 | 44 | # URLError, OSError, etc | 77 | # URLError, OSError, etc |
3079 | 45 | proto, netloc, path, params, query, fragment = urlparse.urlparse(source) | 78 | proto, netloc, path, params, query, fragment = urlparse(source) |
3080 | 46 | if proto in ('http', 'https'): | 79 | if proto in ('http', 'https'): |
3082 | 47 | auth, barehost = urllib2.splituser(netloc) | 80 | auth, barehost = splituser(netloc) |
3083 | 48 | if auth is not None: | 81 | if auth is not None: |
3087 | 49 | source = urlparse.urlunparse((proto, barehost, path, params, query, fragment)) | 82 | source = urlunparse((proto, barehost, path, params, query, fragment)) |
3088 | 50 | username, password = urllib2.splitpasswd(auth) | 83 | username, password = splitpasswd(auth) |
3089 | 51 | passman = urllib2.HTTPPasswordMgrWithDefaultRealm() | 84 | passman = HTTPPasswordMgrWithDefaultRealm() |
3090 | 52 | # Realm is set to None in add_password to force the username and password | 85 | # Realm is set to None in add_password to force the username and password |
3091 | 53 | # to be used whatever the realm | 86 | # to be used whatever the realm |
3092 | 54 | passman.add_password(None, source, username, password) | 87 | passman.add_password(None, source, username, password) |
3097 | 55 | authhandler = urllib2.HTTPBasicAuthHandler(passman) | 88 | authhandler = HTTPBasicAuthHandler(passman) |
3098 | 56 | opener = urllib2.build_opener(authhandler) | 89 | opener = build_opener(authhandler) |
3099 | 57 | urllib2.install_opener(opener) | 90 | install_opener(opener) |
3100 | 58 | response = urllib2.urlopen(source) | 91 | response = urlopen(source) |
3101 | 59 | try: | 92 | try: |
3102 | 60 | with open(dest, 'w') as dest_file: | 93 | with open(dest, 'w') as dest_file: |
3103 | 61 | dest_file.write(response.read()) | 94 | dest_file.write(response.read()) |
3104 | @@ -91,17 +124,21 @@ | |||
3105 | 91 | url_parts = self.parse_url(source) | 124 | url_parts = self.parse_url(source) |
3106 | 92 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | 125 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
3107 | 93 | if not os.path.exists(dest_dir): | 126 | if not os.path.exists(dest_dir): |
3109 | 94 | mkdir(dest_dir, perms=0755) | 127 | mkdir(dest_dir, perms=0o755) |
3110 | 95 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | 128 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) |
3111 | 96 | try: | 129 | try: |
3112 | 97 | self.download(source, dld_file) | 130 | self.download(source, dld_file) |
3114 | 98 | except urllib2.URLError as e: | 131 | except URLError as e: |
3115 | 99 | raise UnhandledSource(e.reason) | 132 | raise UnhandledSource(e.reason) |
3116 | 100 | except OSError as e: | 133 | except OSError as e: |
3117 | 101 | raise UnhandledSource(e.strerror) | 134 | raise UnhandledSource(e.strerror) |
3119 | 102 | options = urlparse.parse_qs(url_parts.fragment) | 135 | options = parse_qs(url_parts.fragment) |
3120 | 103 | for key, value in options.items(): | 136 | for key, value in options.items(): |
3122 | 104 | if key in hashlib.algorithms: | 137 | if not six.PY3: |
3123 | 138 | algorithms = hashlib.algorithms | ||
3124 | 139 | else: | ||
3125 | 140 | algorithms = hashlib.algorithms_available | ||
3126 | 141 | if key in algorithms: | ||
3127 | 105 | check_hash(dld_file, value, key) | 142 | check_hash(dld_file, value, key) |
3128 | 106 | if checksum: | 143 | if checksum: |
3129 | 107 | check_hash(dld_file, checksum, hash_type) | 144 | check_hash(dld_file, checksum, hash_type) |
3130 | 108 | 145 | ||
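`install()` above pulls checksum options out of the URL fragment (e.g. `#sha256=<digest>`) and validates each key against what hashlib offers: `hashlib.algorithms` on Python 2, `hashlib.algorithms_available` on Python 3. The fragment-parsing step in isolation (the URL is illustrative):

```python
import hashlib

try:
    from urllib.parse import urlparse, parse_qs   # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs       # Python 2

url = 'http://example.com/pkg.tar.gz#sha256=abc123'
options = parse_qs(urlparse(url).fragment)
# prefer the Python 3 attribute, fall back to the Python 2 one
algorithms = (getattr(hashlib, 'algorithms_available', None)
              or getattr(hashlib, 'algorithms'))
checks = {k: v[0] for k, v in options.items() if k in algorithms}
```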
3131 | === modified file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
3132 | --- hooks/charmhelpers/fetch/bzrurl.py 2014-06-24 13:40:39 +0000 | |||
3133 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-05 07:18:23 +0000 | |||
3134 | @@ -5,6 +5,10 @@ | |||
3135 | 5 | ) | 5 | ) |
3136 | 6 | from charmhelpers.core.host import mkdir | 6 | from charmhelpers.core.host import mkdir |
3137 | 7 | 7 | ||
3138 | 8 | import six | ||
3139 | 9 | if six.PY3: | ||
3140 | 10 | raise ImportError('bzrlib does not support Python3') | ||
3141 | 11 | |||
3142 | 8 | try: | 12 | try: |
3143 | 9 | from bzrlib.branch import Branch | 13 | from bzrlib.branch import Branch |
3144 | 10 | except ImportError: | 14 | except ImportError: |
3145 | @@ -42,7 +46,7 @@ | |||
3146 | 42 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", | 46 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
3147 | 43 | branch_name) | 47 | branch_name) |
3148 | 44 | if not os.path.exists(dest_dir): | 48 | if not os.path.exists(dest_dir): |
3150 | 45 | mkdir(dest_dir, perms=0755) | 49 | mkdir(dest_dir, perms=0o755) |
3151 | 46 | try: | 50 | try: |
3152 | 47 | self.branch(source, dest_dir) | 51 | self.branch(source, dest_dir) |
3153 | 48 | except OSError as e: | 52 | except OSError as e: |
3154 | 49 | 53 | ||
3155 | === modified file 'hooks/quantum_contexts.py' | |||
3156 | --- hooks/quantum_contexts.py 2014-11-24 09:34:05 +0000 | |||
3157 | +++ hooks/quantum_contexts.py 2014-12-05 07:18:23 +0000 | |||
3158 | @@ -111,8 +111,8 @@ | |||
3159 | 111 | ''' | 111 | ''' |
3160 | 112 | neutron_settings = { | 112 | neutron_settings = { |
3161 | 113 | 'l2_population': False, | 113 | 'l2_population': False, |
3162 | 114 | 'network_device_mtu': 1500, | ||
3163 | 114 | 'overlay_network_type': 'gre', | 115 | 'overlay_network_type': 'gre', |
3164 | 115 | |||
3165 | 116 | } | 116 | } |
3166 | 117 | for rid in relation_ids('neutron-plugin-api'): | 117 | for rid in relation_ids('neutron-plugin-api'): |
3167 | 118 | for unit in related_units(rid): | 118 | for unit in related_units(rid): |
3168 | @@ -122,6 +122,7 @@ | |||
3169 | 122 | neutron_settings = { | 122 | neutron_settings = { |
3170 | 123 | 'l2_population': rdata['l2-population'], | 123 | 'l2_population': rdata['l2-population'], |
3171 | 124 | 'overlay_network_type': rdata['overlay-network-type'], | 124 | 'overlay_network_type': rdata['overlay-network-type'], |
3172 | 125 | 'network_device_mtu': rdata['network-device-mtu'], | ||
3173 | 125 | } | 126 | } |
3174 | 126 | return neutron_settings | 127 | return neutron_settings |
3175 | 127 | return neutron_settings | 128 | return neutron_settings |
3176 | @@ -243,6 +244,7 @@ | |||
3177 | 243 | 'verbose': config('verbose'), | 244 | 'verbose': config('verbose'), |
3178 | 244 | 'instance_mtu': config('instance-mtu'), | 245 | 'instance_mtu': config('instance-mtu'), |
3179 | 245 | 'l2_population': neutron_api_settings['l2_population'], | 246 | 'l2_population': neutron_api_settings['l2_population'], |
3180 | 247 | 'network_device_mtu': neutron_api_settings['network_device_mtu'], | ||
3181 | 246 | 'overlay_network_type': | 248 | 'overlay_network_type': |
3182 | 247 | neutron_api_settings['overlay_network_type'], | 249 | neutron_api_settings['overlay_network_type'], |
3183 | 248 | } | 250 | } |
3184 | 249 | 251 | ||
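The `quantum_contexts` change above adds `network_device_mtu` to both the defaults and the relation-derived settings, so templates always see a value. The shape of `_neutron_api_settings` after the change, reduced to plain dicts with the relation plumbing stubbed out:

```python
DEFAULT_SETTINGS = {
    'l2_population': False,
    'network_device_mtu': 1500,
    'overlay_network_type': 'gre',
}

def neutron_api_settings(rdata=None):
    # defaults apply until a neutron-plugin-api unit publishes settings
    if rdata and 'l2-population' in rdata:
        return {
            'l2_population': rdata['l2-population'],
            'overlay_network_type': rdata['overlay-network-type'],
            'network_device_mtu': rdata['network-device-mtu'],
        }
    return dict(DEFAULT_SETTINGS)
```

As in the diff, a relation that publishes `l2-population` without `network-device-mtu` would raise `KeyError`; the sketch deliberately mirrors that behaviour rather than papering over it.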
3185 | === modified file 'hooks/quantum_hooks.py' | |||
3186 | --- hooks/quantum_hooks.py 2014-11-19 03:09:34 +0000 | |||
3187 | +++ hooks/quantum_hooks.py 2014-12-05 07:18:23 +0000 | |||
3188 | @@ -22,6 +22,9 @@ | |||
3189 | 22 | restart_on_change, | 22 | restart_on_change, |
3190 | 23 | lsb_release, | 23 | lsb_release, |
3191 | 24 | ) | 24 | ) |
3192 | 25 | from charmhelpers.contrib.network.ip import ( | ||
3193 | 26 | configure_phy_nic_mtu | ||
3194 | 27 | ) | ||
3195 | 25 | from charmhelpers.contrib.hahelpers.cluster import( | 28 | from charmhelpers.contrib.hahelpers.cluster import( |
3196 | 26 | eligible_leader | 29 | eligible_leader |
3197 | 27 | ) | 30 | ) |
3198 | @@ -66,6 +69,7 @@ | |||
3199 | 66 | fatal=True) | 69 | fatal=True) |
3200 | 67 | apt_install(filter_installed_packages(get_packages()), | 70 | apt_install(filter_installed_packages(get_packages()), |
3201 | 68 | fatal=True) | 71 | fatal=True) |
3202 | 72 | configure_phy_nic_mtu() | ||
3203 | 69 | else: | 73 | else: |
3204 | 70 | log('Please provide a valid plugin config', level=ERROR) | 74 | log('Please provide a valid plugin config', level=ERROR) |
3205 | 71 | sys.exit(1) | 75 | sys.exit(1) |
3206 | @@ -89,6 +93,7 @@ | |||
3207 | 89 | if valid_plugin(): | 93 | if valid_plugin(): |
3208 | 90 | CONFIGS.write_all() | 94 | CONFIGS.write_all() |
3209 | 91 | configure_ovs() | 95 | configure_ovs() |
3210 | 96 | configure_phy_nic_mtu() | ||
3211 | 92 | else: | 97 | else: |
3212 | 93 | log('Please provide a valid plugin config', level=ERROR) | 98 | log('Please provide a valid plugin config', level=ERROR) |
3213 | 94 | sys.exit(1) | 99 | sys.exit(1) |
3214 | 95 | 100 | ||
3215 | === modified file 'hooks/quantum_utils.py' | |||
3216 | --- hooks/quantum_utils.py 2014-11-24 09:34:05 +0000 | |||
3217 | +++ hooks/quantum_utils.py 2014-12-05 07:18:23 +0000 | |||
3218 | @@ -2,7 +2,7 @@ | |||
3219 | 2 | service_running, | 2 | service_running, |
3220 | 3 | service_stop, | 3 | service_stop, |
3221 | 4 | service_restart, | 4 | service_restart, |
3223 | 5 | lsb_release | 5 | lsb_release, |
3224 | 6 | ) | 6 | ) |
3225 | 7 | from charmhelpers.core.hookenv import ( | 7 | from charmhelpers.core.hookenv import ( |
3226 | 8 | log, | 8 | log, |
3227 | 9 | 9 | ||
3228 | === modified file 'templates/icehouse/neutron.conf' | |||
3229 | --- templates/icehouse/neutron.conf 2014-06-11 09:30:31 +0000 | |||
3230 | +++ templates/icehouse/neutron.conf 2014-12-05 07:18:23 +0000 | |||
3231 | @@ -11,5 +11,6 @@ | |||
3232 | 11 | control_exchange = neutron | 11 | control_exchange = neutron |
3233 | 12 | notification_driver = neutron.openstack.common.notifier.list_notifier | 12 | notification_driver = neutron.openstack.common.notifier.list_notifier |
3234 | 13 | list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier | 13 | list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier |
3235 | 14 | network_device_mtu = {{ network_device_mtu }} | ||
3236 | 14 | [agent] | 15 | [agent] |
3237 | 15 | root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf | 16 | root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf |
3238 | 16 | \ No newline at end of file | 17 | \ No newline at end of file |
3239 | 17 | 18 | ||
3240 | === modified file 'unit_tests/test_quantum_contexts.py' | |||
3241 | --- unit_tests/test_quantum_contexts.py 2014-11-24 09:34:05 +0000 | |||
3242 | +++ unit_tests/test_quantum_contexts.py 2014-12-05 07:18:23 +0000 | |||
3243 | @@ -46,48 +46,12 @@ | |||
3244 | 46 | yield mock_open, mock_file | 46 | yield mock_open, mock_file |
3245 | 47 | 47 | ||
3246 | 48 | 48 | ||
3248 | 49 | class _TestQuantumContext(CharmTestCase): | 49 | class TestNetworkServiceContext(CharmTestCase): |
3249 | 50 | 50 | ||
3250 | 51 | def setUp(self): | 51 | def setUp(self): |
3252 | 52 | super(_TestQuantumContext, self).setUp(quantum_contexts, TO_PATCH) | 52 | super(TestNetworkServiceContext, self).setUp(quantum_contexts, |
3253 | 53 | TO_PATCH) | ||
3254 | 53 | self.config.side_effect = self.test_config.get | 54 | self.config.side_effect = self.test_config.get |
3255 | 54 | |||
3256 | 55 | def test_not_related(self): | ||
3257 | 56 | self.relation_ids.return_value = [] | ||
3258 | 57 | self.assertEquals(self.context(), {}) | ||
3259 | 58 | |||
3260 | 59 | def test_no_units(self): | ||
3261 | 60 | self.relation_ids.return_value = [] | ||
3262 | 61 | self.relation_ids.return_value = ['foo'] | ||
3263 | 62 | self.related_units.return_value = [] | ||
3264 | 63 | self.assertEquals(self.context(), {}) | ||
3265 | 64 | |||
3266 | 65 | def test_no_data(self): | ||
3267 | 66 | self.relation_ids.return_value = ['foo'] | ||
3268 | 67 | self.related_units.return_value = ['bar'] | ||
3269 | 68 | self.relation_get.side_effect = self.test_relation.get | ||
3270 | 69 | self.context_complete.return_value = False | ||
3271 | 70 | self.assertEquals(self.context(), {}) | ||
3272 | 71 | |||
3273 | 72 | def test_data_multi_unit(self): | ||
3274 | 73 | self.relation_ids.return_value = ['foo'] | ||
3275 | 74 | self.related_units.return_value = ['bar', 'baz'] | ||
3276 | 75 | self.context_complete.return_value = True | ||
3277 | 76 | self.relation_get.side_effect = self.test_relation.get | ||
3278 | 77 | self.assertEquals(self.context(), self.data_result) | ||
3279 | 78 | |||
3280 | 79 | def test_data_single_unit(self): | ||
3281 | 80 | self.relation_ids.return_value = ['foo'] | ||
3282 | 81 | self.related_units.return_value = ['bar'] | ||
3283 | 82 | self.context_complete.return_value = True | ||
3284 | 83 | self.relation_get.side_effect = self.test_relation.get | ||
3285 | 84 | self.assertEquals(self.context(), self.data_result) | ||
3286 | 85 | |||
3287 | 86 | |||
3288 | 87 | class TestNetworkServiceContext(_TestQuantumContext): | ||
3289 | 88 | |||
3290 | 89 | def setUp(self): | ||
3291 | 90 | super(TestNetworkServiceContext, self).setUp() | ||
3292 | 91 | self.context = quantum_contexts.NetworkServiceContext() | 55 | self.context = quantum_contexts.NetworkServiceContext() |
3293 | 92 | self.test_relation.set( | 56 | self.test_relation.set( |
3294 | 93 | {'keystone_host': '10.5.0.1', | 57 | {'keystone_host': '10.5.0.1', |
3295 | @@ -116,6 +80,37 @@ | |||
3296 | 116 | 'auth_protocol': 'http', | 80 | 'auth_protocol': 'http', |
3297 | 117 | } | 81 | } |
3298 | 118 | 82 | ||
3299 | 83 | def test_not_related(self): | ||
3300 | 84 | self.relation_ids.return_value = [] | ||
3301 | 85 | self.assertEquals(self.context(), {}) | ||
3302 | 86 | |||
3303 | 87 | def test_no_units(self): | ||
3304 | 88 | self.relation_ids.return_value = [] | ||
3305 | 89 | self.relation_ids.return_value = ['foo'] | ||
3306 | 90 | self.related_units.return_value = [] | ||
3307 | 91 | self.assertEquals(self.context(), {}) | ||
3308 | 92 | |||
3309 | 93 | def test_no_data(self): | ||
3310 | 94 | self.relation_ids.return_value = ['foo'] | ||
3311 | 95 | self.related_units.return_value = ['bar'] | ||
3312 | 96 | self.relation_get.side_effect = self.test_relation.get | ||
3313 | 97 | self.context_complete.return_value = False | ||
3314 | 98 | self.assertEquals(self.context(), {}) | ||
3315 | 99 | |||
3316 | 100 | def test_data_multi_unit(self): | ||
3317 | 101 | self.relation_ids.return_value = ['foo'] | ||
3318 | 102 | self.related_units.return_value = ['bar', 'baz'] | ||
3319 | 103 | self.context_complete.return_value = True | ||
3320 | 104 | self.relation_get.side_effect = self.test_relation.get | ||
3321 | 105 | self.assertEquals(self.context(), self.data_result) | ||
3322 | 106 | |||
3323 | 107 | def test_data_single_unit(self): | ||
3324 | 108 | self.relation_ids.return_value = ['foo'] | ||
3325 | 109 | self.related_units.return_value = ['bar'] | ||
3326 | 110 | self.context_complete.return_value = True | ||
3327 | 111 | self.relation_get.side_effect = self.test_relation.get | ||
3328 | 112 | self.assertEquals(self.context(), self.data_result) | ||
3329 | 113 | |||
3330 | 119 | 114 | ||
3331 | 120 | class TestNeutronPortContext(CharmTestCase): | 115 | class TestNeutronPortContext(CharmTestCase): |
3332 | 121 | 116 | ||
3333 | @@ -241,6 +236,7 @@ | |||
3334 | 241 | 'debug': False, | 236 | 'debug': False, |
3335 | 242 | 'verbose': True, | 237 | 'verbose': True, |
3336 | 243 | 'l2_population': False, | 238 | 'l2_population': False, |
3337 | 239 | 'network_device_mtu': 1500, | ||
3338 | 244 | 'overlay_network_type': 'gre', | 240 | 'overlay_network_type': 'gre', |
3339 | 245 | }) | 241 | }) |
3340 | 246 | 242 | ||
3341 | @@ -367,24 +363,29 @@ | |||
         self.relation_ids.return_value = ['foo']
         self.related_units.return_value = ['bar']
         self.test_relation.set({'l2-population': True,
+                                'network-device-mtu': 1500,
                                 'overlay-network-type': 'gre', })
         self.relation_get.side_effect = self.test_relation.get
         self.assertEquals(quantum_contexts._neutron_api_settings(),
                           {'l2_population': True,
+                           'network_device_mtu': 1500,
                            'overlay_network_type': 'gre'})
 
     def test_neutron_api_settings2(self):
         self.relation_ids.return_value = ['foo']
         self.related_units.return_value = ['bar']
         self.test_relation.set({'l2-population': True,
+                                'network-device-mtu': 1500,
                                 'overlay-network-type': 'gre', })
         self.relation_get.side_effect = self.test_relation.get
         self.assertEquals(quantum_contexts._neutron_api_settings(),
                           {'l2_population': True,
+                           'network_device_mtu': 1500,
                            'overlay_network_type': 'gre'})
 
     def test_neutron_api_settings_no_apiplugin(self):
         self.relation_ids.return_value = []
         self.assertEquals(quantum_contexts._neutron_api_settings(),
                           {'l2_population': False,
+                           'network_device_mtu': 1500,
                            'overlay_network_type': 'gre', })
 
=== modified file 'unit_tests/test_quantum_hooks.py'
--- unit_tests/test_quantum_hooks.py	2014-11-19 03:09:34 +0000
+++ unit_tests/test_quantum_hooks.py	2014-12-05 07:18:23 +0000
@@ -40,7 +40,8 @@
     'lsb_release',
     'stop_services',
     'b64decode',
-    'is_relation_made'
+    'is_relation_made',
+    'configure_phy_nic_mtu'
 ]
 
 
@@ -80,6 +81,7 @@
         self.assertTrue(self.get_early_packages.called)
         self.assertTrue(self.get_packages.called)
         self.assertTrue(self.execd_preinstall.called)
+        self.assertTrue(self.configure_phy_nic_mtu.called)
 
     def test_install_hook_precise_nocloudarchive(self):
         self.test_config.set('openstack-origin', 'distro')
@@ -112,6 +114,7 @@
         self.assertTrue(_pgsql_db_joined.called)
         self.assertTrue(_amqp_joined.called)
         self.assertTrue(_amqp_nova_joined.called)
+        self.assertTrue(self.configure_phy_nic_mtu.called)
 
     def test_config_changed_upgrade(self):
         self.openstack_upgrade_available.return_value = True
@@ -119,6 +122,7 @@
         self._call_hook('config-changed')
         self.assertTrue(self.do_openstack_upgrade.called)
         self.assertTrue(self.configure_ovs.called)
+        self.assertTrue(self.configure_phy_nic_mtu.called)
 
     def test_config_changed_n1kv(self):
         self.openstack_upgrade_available.return_value = False
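These hook tests only assert that `configure_phy_nic_mtu()` is invoked from the install and config-changed hooks; the helper itself lives in quantum_utils.py. As a rough sketch of what such a helper typically does — read an interface name and MTU from charm config and apply them with `ip link` — the following builds the command without running it. The config keys (`ext-port`, `phy-nic-mtu`) and the function name `build_mtu_command` are illustrative assumptions, not taken from this diff:

```python
def build_mtu_command(config):
    """Build the `ip link` command a configure_phy_nic_mtu-style helper
    might run. Config keys here ('ext-port', 'phy-nic-mtu') are
    hypothetical examples, not confirmed by the merge proposal.
    """
    nic = config.get('ext-port')
    mtu = config.get('phy-nic-mtu')
    if not nic or not mtu:
        # Nothing to configure; the hook would simply return.
        return None
    return ['ip', 'link', 'set', nic, 'mtu', str(mtu)]
```

A real hook would pass the resulting list to `subprocess.check_call()`; splitting command construction from execution is what lets the unit tests above verify the call with a mock instead of touching a live NIC.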
UOSCI bot says:
charm_unit_test #1048 quantum-gateway-next for zhhuabj mp242612
UNIT OK: passed
UNIT Results (max last 5 lines):
  hooks/quantum_hooks    106    2    98%   199-201
  hooks/quantum_utils    214   11    95%   394, 581-590
  TOTAL                  451   18    96%
  Ran 83 tests in 3.175s
  OK
Full unit test output: http://paste.ubuntu.com/9206999/
Build: http://10.98.191.181:8080/job/charm_unit_test/1048/