Merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646 into lp:~openstack-charmers-archive/charms/trusty/nova-compute/next
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-compute/next
Diff against target:
3776 lines (+1430/-528) 35 files modified
charm-helpers-hooks.yaml (+1/-0)
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+90/-54)
hooks/charmhelpers/contrib/network/ufw.py (+182/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+339/-225)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/python/debug.py (+40/-0)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/python/rpdb.py (+42/-0)
hooks/charmhelpers/contrib/python/version.py (+18/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+81/-21)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
hooks/nova_compute_hooks.py (+6/-2)
hooks/nova_compute_utils.py (+1/-1)
unit_tests/test_nova_compute_hooks.py (+4/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Edward Hope-Morley | Pending | ||
Xiang Hui | Pending | ||
Review via email: mp+243979@code.launchpad.net
This proposal supersedes a proposal from 2014-11-24.
Commit message
Description of the change
This story (SF#74646) adds support for setting a VM MTU of <=1500 by configuring the MTU of the physical NICs and the network_device_mtu option.
1. Set the MTU for the physical NICs in both the nova-compute and neutron-gateway charms:
juju set nova-compute phy-nic-mtu=1546
juju set neutron-gateway phy-nic-mtu=1546
2. Set the MTU for the peer devices between the OVS bridge br-phy and the OVS bridge br-int by adding 'network-
juju set neutron-api network-
Limitations:
a. Linux bridge is not supported, because the three parameters (ovs_use_veth, use_veth_
b. For GRE and VXLAN, this step is optional.
c. After setting network-
3. The MTU inside the VM can then be configured via DHCP by setting the instance-mtu option:
juju set neutron-gateway instance-mtu=1500
Limitations:
a. Only a VM MTU of <=1500 is supported; to set a VM MTU of >1500, the MTU of the tap devices associated with that VM must also be set, as described at this link (http://
b. Per-network MTU is not supported.
NOTE: this feature may not be testable in bastion.
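The nova-compute side of this change adds a configure_phy_nic_mtu() helper (see the preview diff below) that locates the physical NIC carrying the unit's management IP, resolving bridges down to their eth/bond member. Below is a standalone sketch of that selection logic; the real helper reads live interfaces via charmhelpers, while here the host state is passed in explicitly so the logic can be exercised in isolation (find_phy_nic and the interface names are illustrative, not part of the charm):

```python
def find_phy_nic(mng_ip, nic_addrs, bridge_nics):
    """Return the physical NIC carrying the management IP.

    nic_addrs: dict mapping nic name -> list of IPv4 addresses
    bridge_nics: dict mapping bridge name -> list of member nics
    """
    for nic, addrs in sorted(nic_addrs.items()):
        if mng_ip not in addrs:
            continue
        # If the address lives on a bridge, walk its members to
        # find the underlying eth/bond device, as the charm does.
        if nic.startswith('br'):
            for brnic in bridge_nics.get(nic, []):
                if brnic.startswith(('eth', 'bond')):
                    return brnic
            return None
        return nic
    return None


if __name__ == '__main__':
    nic_addrs = {'eth0': ['10.0.0.5'], 'br-phy': ['10.0.1.7']}
    bridge_nics = {'br-phy': ['eth1']}
    print(find_phy_nic('10.0.0.5', nic_addrs, bridge_nics))  # eth0
    print(find_phy_nic('10.0.1.7', nic_addrs, bridge_nics))  # eth1
```

Once found, the charm only rewrites the MTU when it differs from the configured phy-nic-mtu, avoiding needless interface changes.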
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_unit_test #1047 nova-compute-next for zhhuabj mp242611
UNIT OK: passed
UNIT Results (max last 5 lines):
nova_
nova_
TOTAL 578 203 65%
Ran 56 tests in 3.028s
OK (SKIP=5)
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_amulet_test #517 nova-compute-next for zhhuabj mp242611
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
subprocess.
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test INFO : Results: 1 passed, 2 failed, 0 errored
ERROR subprocess encountered error code 2
make: *** [test] Error 2
Full amulet test output: http://
Build: http://
Edward Hope-Morley (hopem) wrote : Posted in a previous version of this proposal | # |
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_lint_check #1312 nova-compute-next for zhhuabj mp242611
LINT OK: passed
LINT Results (max last 5 lines):
I: config.yaml: option os-data-network has no default value
I: config.yaml: option config-flags has no default value
I: config.yaml: option instances-path has no default value
W: config.yaml: option disable-
I: config.yaml: option migration-auth-type has no default value
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_unit_test #1146 nova-compute-next for zhhuabj mp242611
UNIT OK: passed
UNIT Results (max last 5 lines):
nova_
nova_
TOTAL 566 193 66%
Ran 56 tests in 3.216s
OK (SKIP=5)
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal | # |
UOSCI bot says:
charm_amulet_test #578 nova-compute-next for zhhuabj mp242611
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
juju-test INFO : Results: 2 passed, 1 failed, 0 errored
ERROR subprocess encountered error code 1
make: *** [test] Error 1
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
UOSCI bot says:
charm_lint_check #94 nova-compute-next for zhhuabj mp243979
LINT OK: passed
LINT Results (max last 5 lines):
I: config.yaml: option os-data-network has no default value
I: config.yaml: option config-flags has no default value
I: config.yaml: option instances-path has no default value
W: config.yaml: option disable-
I: config.yaml: option migration-auth-type has no default value
Full lint test output: pastebin not avail., cmd error
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
UOSCI bot says:
charm_unit_test #94 nova-compute-next for zhhuabj mp243979
UNIT FAIL: unit-test failed
UNIT Results (max last 5 lines):
ImportError: No module named python.packages
Name Stmts Miss Cover Missing
Ran 3 tests in 0.002s
FAILED (errors=3)
make: *** [unit_test] Error 1
Full unit test output: pastebin not avail., cmd error
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote : | # |
UOSCI bot says:
charm_amulet_test #54 nova-compute-next for zhhuabj mp243979
AMULET FAIL: amulet-test failed
AMULET Results (max last 5 lines):
juju-
juju-
juju-test INFO : Results: 1 passed, 2 failed, 0 errored
ERROR subprocess encountered error code 2
make: *** [test] Error 2
Full amulet test output: pastebin not avail., cmd error
Build: http://
Unmerged revisions
Preview Diff
1 | === modified file 'charm-helpers-hooks.yaml' | |||
2 | --- charm-helpers-hooks.yaml 2014-09-23 10:21:54 +0000 | |||
3 | +++ charm-helpers-hooks.yaml 2014-12-10 07:57:54 +0000 | |||
4 | @@ -9,4 +9,5 @@ | |||
5 | 9 | - apache | 9 | - apache |
6 | 10 | - cluster | 10 | - cluster |
7 | 11 | - contrib.network | 11 | - contrib.network |
8 | 12 | - contrib.python | ||
9 | 12 | - payload.execd | 13 | - payload.execd |
10 | 13 | 14 | ||
11 | === modified file 'config.yaml' | |||
12 | --- config.yaml 2014-11-28 12:54:57 +0000 | |||
13 | +++ config.yaml 2014-12-10 07:57:54 +0000 | |||
14 | @@ -154,3 +154,9 @@ | |||
15 | 154 | order for this charm to function correctly, the privacy extension must be | 154 | order for this charm to function correctly, the privacy extension must be |
16 | 155 | disabled and a non-temporary address must be configured/available on | 155 | disabled and a non-temporary address must be configured/available on |
17 | 156 | your network interface. | 156 | your network interface. |
18 | 157 | phy-nic-mtu: | ||
19 | 158 | type: int | ||
20 | 159 | default: 1500 | ||
21 | 160 | description: | | ||
22 | 161 | To improve network performance of VM, sometimes we should keep VM MTU as 1500 | ||
23 | 162 | and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE) | ||
24 | 157 | 163 | ||
25 | === modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py' | |||
26 | --- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-06 21:57:43 +0000 | |||
27 | +++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-10 07:57:54 +0000 | |||
28 | @@ -13,9 +13,10 @@ | |||
29 | 13 | 13 | ||
30 | 14 | import subprocess | 14 | import subprocess |
31 | 15 | import os | 15 | import os |
32 | 16 | |||
33 | 17 | from socket import gethostname as get_unit_hostname | 16 | from socket import gethostname as get_unit_hostname |
34 | 18 | 17 | ||
35 | 18 | import six | ||
36 | 19 | |||
37 | 19 | from charmhelpers.core.hookenv import ( | 20 | from charmhelpers.core.hookenv import ( |
38 | 20 | log, | 21 | log, |
39 | 21 | relation_ids, | 22 | relation_ids, |
40 | @@ -77,7 +78,7 @@ | |||
41 | 77 | "show", resource | 78 | "show", resource |
42 | 78 | ] | 79 | ] |
43 | 79 | try: | 80 | try: |
45 | 80 | status = subprocess.check_output(cmd) | 81 | status = subprocess.check_output(cmd).decode('UTF-8') |
46 | 81 | except subprocess.CalledProcessError: | 82 | except subprocess.CalledProcessError: |
47 | 82 | return False | 83 | return False |
48 | 83 | else: | 84 | else: |
49 | @@ -150,34 +151,42 @@ | |||
50 | 150 | return False | 151 | return False |
51 | 151 | 152 | ||
52 | 152 | 153 | ||
54 | 153 | def determine_api_port(public_port): | 154 | def determine_api_port(public_port, singlenode_mode=False): |
55 | 154 | ''' | 155 | ''' |
56 | 155 | Determine correct API server listening port based on | 156 | Determine correct API server listening port based on |
57 | 156 | existence of HTTPS reverse proxy and/or haproxy. | 157 | existence of HTTPS reverse proxy and/or haproxy. |
58 | 157 | 158 | ||
59 | 158 | public_port: int: standard public port for given service | 159 | public_port: int: standard public port for given service |
60 | 159 | 160 | ||
61 | 161 | singlenode_mode: boolean: Shuffle ports when only a single unit is present | ||
62 | 162 | |||
63 | 160 | returns: int: the correct listening port for the API service | 163 | returns: int: the correct listening port for the API service |
64 | 161 | ''' | 164 | ''' |
65 | 162 | i = 0 | 165 | i = 0 |
67 | 163 | if len(peer_units()) > 0 or is_clustered(): | 166 | if singlenode_mode: |
68 | 167 | i += 1 | ||
69 | 168 | elif len(peer_units()) > 0 or is_clustered(): | ||
70 | 164 | i += 1 | 169 | i += 1 |
71 | 165 | if https(): | 170 | if https(): |
72 | 166 | i += 1 | 171 | i += 1 |
73 | 167 | return public_port - (i * 10) | 172 | return public_port - (i * 10) |
74 | 168 | 173 | ||
75 | 169 | 174 | ||
77 | 170 | def determine_apache_port(public_port): | 175 | def determine_apache_port(public_port, singlenode_mode=False): |
78 | 171 | ''' | 176 | ''' |
79 | 172 | Description: Determine correct apache listening port based on public IP + | 177 | Description: Determine correct apache listening port based on public IP + |
80 | 173 | state of the cluster. | 178 | state of the cluster. |
81 | 174 | 179 | ||
82 | 175 | public_port: int: standard public port for given service | 180 | public_port: int: standard public port for given service |
83 | 176 | 181 | ||
84 | 182 | singlenode_mode: boolean: Shuffle ports when only a single unit is present | ||
85 | 183 | |||
86 | 177 | returns: int: the correct listening port for the HAProxy service | 184 | returns: int: the correct listening port for the HAProxy service |
87 | 178 | ''' | 185 | ''' |
88 | 179 | i = 0 | 186 | i = 0 |
90 | 180 | if len(peer_units()) > 0 or is_clustered(): | 187 | if singlenode_mode: |
91 | 188 | i += 1 | ||
92 | 189 | elif len(peer_units()) > 0 or is_clustered(): | ||
93 | 181 | i += 1 | 190 | i += 1 |
94 | 182 | return public_port - (i * 10) | 191 | return public_port - (i * 10) |
95 | 183 | 192 | ||
96 | @@ -197,7 +206,7 @@ | |||
97 | 197 | for setting in settings: | 206 | for setting in settings: |
98 | 198 | conf[setting] = config_get(setting) | 207 | conf[setting] = config_get(setting) |
99 | 199 | missing = [] | 208 | missing = [] |
101 | 200 | [missing.append(s) for s, v in conf.iteritems() if v is None] | 209 | [missing.append(s) for s, v in six.iteritems(conf) if v is None] |
102 | 201 | if missing: | 210 | if missing: |
103 | 202 | log('Insufficient config data to configure hacluster.', level=ERROR) | 211 | log('Insufficient config data to configure hacluster.', level=ERROR) |
104 | 203 | raise HAIncompleteConfig | 212 | raise HAIncompleteConfig |
105 | 204 | 213 | ||
106 | === modified file 'hooks/charmhelpers/contrib/network/ip.py' | |||
107 | --- hooks/charmhelpers/contrib/network/ip.py 2014-10-06 21:57:43 +0000 | |||
108 | +++ hooks/charmhelpers/contrib/network/ip.py 2014-12-10 07:57:54 +0000 | |||
109 | @@ -1,16 +1,20 @@ | |||
110 | 1 | import glob | 1 | import glob |
111 | 2 | import re | 2 | import re |
112 | 3 | import subprocess | 3 | import subprocess |
113 | 4 | import sys | ||
114 | 5 | 4 | ||
115 | 6 | from functools import partial | 5 | from functools import partial |
116 | 7 | 6 | ||
117 | 8 | from charmhelpers.core.hookenv import unit_get | 7 | from charmhelpers.core.hookenv import unit_get |
118 | 9 | from charmhelpers.fetch import apt_install | 8 | from charmhelpers.fetch import apt_install |
119 | 10 | from charmhelpers.core.hookenv import ( | 9 | from charmhelpers.core.hookenv import ( |
123 | 11 | WARNING, | 10 | config, |
124 | 12 | ERROR, | 11 | log, |
125 | 13 | log | 12 | INFO |
126 | 13 | ) | ||
127 | 14 | from charmhelpers.core.host import ( | ||
128 | 15 | list_nics, | ||
129 | 16 | get_nic_mtu, | ||
130 | 17 | set_nic_mtu | ||
131 | 14 | ) | 18 | ) |
132 | 15 | 19 | ||
133 | 16 | try: | 20 | try: |
134 | @@ -34,31 +38,28 @@ | |||
135 | 34 | network) | 38 | network) |
136 | 35 | 39 | ||
137 | 36 | 40 | ||
138 | 41 | def no_ip_found_error_out(network): | ||
139 | 42 | errmsg = ("No IP address found in network: %s" % network) | ||
140 | 43 | raise ValueError(errmsg) | ||
141 | 44 | |||
142 | 45 | |||
143 | 37 | def get_address_in_network(network, fallback=None, fatal=False): | 46 | def get_address_in_network(network, fallback=None, fatal=False): |
146 | 38 | """ | 47 | """Get an IPv4 or IPv6 address within the network from the host. |
145 | 39 | Get an IPv4 or IPv6 address within the network from the host. | ||
147 | 40 | 48 | ||
148 | 41 | :param network (str): CIDR presentation format. For example, | 49 | :param network (str): CIDR presentation format. For example, |
149 | 42 | '192.168.1.0/24'. | 50 | '192.168.1.0/24'. |
150 | 43 | :param fallback (str): If no address is found, return fallback. | 51 | :param fallback (str): If no address is found, return fallback. |
151 | 44 | :param fatal (boolean): If no address is found, fallback is not | 52 | :param fatal (boolean): If no address is found, fallback is not |
152 | 45 | set and fatal is True then exit(1). | 53 | set and fatal is True then exit(1). |
153 | 46 | |||
154 | 47 | """ | 54 | """ |
155 | 48 | |||
156 | 49 | def not_found_error_out(): | ||
157 | 50 | log("No IP address found in network: %s" % network, | ||
158 | 51 | level=ERROR) | ||
159 | 52 | sys.exit(1) | ||
160 | 53 | |||
161 | 54 | if network is None: | 55 | if network is None: |
162 | 55 | if fallback is not None: | 56 | if fallback is not None: |
163 | 56 | return fallback | 57 | return fallback |
164 | 58 | |||
165 | 59 | if fatal: | ||
166 | 60 | no_ip_found_error_out(network) | ||
167 | 57 | else: | 61 | else: |
172 | 58 | if fatal: | 62 | return None |
169 | 59 | not_found_error_out() | ||
170 | 60 | else: | ||
171 | 61 | return None | ||
173 | 62 | 63 | ||
174 | 63 | _validate_cidr(network) | 64 | _validate_cidr(network) |
175 | 64 | network = netaddr.IPNetwork(network) | 65 | network = netaddr.IPNetwork(network) |
176 | @@ -70,6 +71,7 @@ | |||
177 | 70 | cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) | 71 | cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) |
178 | 71 | if cidr in network: | 72 | if cidr in network: |
179 | 72 | return str(cidr.ip) | 73 | return str(cidr.ip) |
180 | 74 | |||
181 | 73 | if network.version == 6 and netifaces.AF_INET6 in addresses: | 75 | if network.version == 6 and netifaces.AF_INET6 in addresses: |
182 | 74 | for addr in addresses[netifaces.AF_INET6]: | 76 | for addr in addresses[netifaces.AF_INET6]: |
183 | 75 | if not addr['addr'].startswith('fe80'): | 77 | if not addr['addr'].startswith('fe80'): |
184 | @@ -82,20 +84,20 @@ | |||
185 | 82 | return fallback | 84 | return fallback |
186 | 83 | 85 | ||
187 | 84 | if fatal: | 86 | if fatal: |
189 | 85 | not_found_error_out() | 87 | no_ip_found_error_out(network) |
190 | 86 | 88 | ||
191 | 87 | return None | 89 | return None |
192 | 88 | 90 | ||
193 | 89 | 91 | ||
194 | 90 | def is_ipv6(address): | 92 | def is_ipv6(address): |
196 | 91 | '''Determine whether provided address is IPv6 or not''' | 93 | """Determine whether provided address is IPv6 or not.""" |
197 | 92 | try: | 94 | try: |
198 | 93 | address = netaddr.IPAddress(address) | 95 | address = netaddr.IPAddress(address) |
199 | 94 | except netaddr.AddrFormatError: | 96 | except netaddr.AddrFormatError: |
200 | 95 | # probably a hostname - so not an address at all! | 97 | # probably a hostname - so not an address at all! |
201 | 96 | return False | 98 | return False |
204 | 97 | else: | 99 | |
205 | 98 | return address.version == 6 | 100 | return address.version == 6 |
206 | 99 | 101 | ||
207 | 100 | 102 | ||
208 | 101 | def is_address_in_network(network, address): | 103 | def is_address_in_network(network, address): |
209 | @@ -113,11 +115,13 @@ | |||
210 | 113 | except (netaddr.core.AddrFormatError, ValueError): | 115 | except (netaddr.core.AddrFormatError, ValueError): |
211 | 114 | raise ValueError("Network (%s) is not in CIDR presentation format" % | 116 | raise ValueError("Network (%s) is not in CIDR presentation format" % |
212 | 115 | network) | 117 | network) |
213 | 118 | |||
214 | 116 | try: | 119 | try: |
215 | 117 | address = netaddr.IPAddress(address) | 120 | address = netaddr.IPAddress(address) |
216 | 118 | except (netaddr.core.AddrFormatError, ValueError): | 121 | except (netaddr.core.AddrFormatError, ValueError): |
217 | 119 | raise ValueError("Address (%s) is not in correct presentation format" % | 122 | raise ValueError("Address (%s) is not in correct presentation format" % |
218 | 120 | address) | 123 | address) |
219 | 124 | |||
220 | 121 | if address in network: | 125 | if address in network: |
221 | 122 | return True | 126 | return True |
222 | 123 | else: | 127 | else: |
223 | @@ -140,57 +144,63 @@ | |||
224 | 140 | if address.version == 4 and netifaces.AF_INET in addresses: | 144 | if address.version == 4 and netifaces.AF_INET in addresses: |
225 | 141 | addr = addresses[netifaces.AF_INET][0]['addr'] | 145 | addr = addresses[netifaces.AF_INET][0]['addr'] |
226 | 142 | netmask = addresses[netifaces.AF_INET][0]['netmask'] | 146 | netmask = addresses[netifaces.AF_INET][0]['netmask'] |
228 | 143 | cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask)) | 147 | network = netaddr.IPNetwork("%s/%s" % (addr, netmask)) |
229 | 148 | cidr = network.cidr | ||
230 | 144 | if address in cidr: | 149 | if address in cidr: |
231 | 145 | if key == 'iface': | 150 | if key == 'iface': |
232 | 146 | return iface | 151 | return iface |
233 | 147 | else: | 152 | else: |
234 | 148 | return addresses[netifaces.AF_INET][0][key] | 153 | return addresses[netifaces.AF_INET][0][key] |
235 | 154 | |||
236 | 149 | if address.version == 6 and netifaces.AF_INET6 in addresses: | 155 | if address.version == 6 and netifaces.AF_INET6 in addresses: |
237 | 150 | for addr in addresses[netifaces.AF_INET6]: | 156 | for addr in addresses[netifaces.AF_INET6]: |
238 | 151 | if not addr['addr'].startswith('fe80'): | 157 | if not addr['addr'].startswith('fe80'): |
241 | 152 | cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'], | 158 | network = netaddr.IPNetwork("%s/%s" % (addr['addr'], |
242 | 153 | addr['netmask'])) | 159 | addr['netmask'])) |
243 | 160 | cidr = network.cidr | ||
244 | 154 | if address in cidr: | 161 | if address in cidr: |
245 | 155 | if key == 'iface': | 162 | if key == 'iface': |
246 | 156 | return iface | 163 | return iface |
247 | 164 | elif key == 'netmask' and cidr: | ||
248 | 165 | return str(cidr).split('/')[1] | ||
249 | 157 | else: | 166 | else: |
250 | 158 | return addr[key] | 167 | return addr[key] |
251 | 168 | |||
252 | 159 | return None | 169 | return None |
253 | 160 | 170 | ||
254 | 161 | 171 | ||
255 | 162 | get_iface_for_address = partial(_get_for_address, key='iface') | 172 | get_iface_for_address = partial(_get_for_address, key='iface') |
256 | 163 | 173 | ||
257 | 174 | |||
258 | 164 | get_netmask_for_address = partial(_get_for_address, key='netmask') | 175 | get_netmask_for_address = partial(_get_for_address, key='netmask') |
259 | 165 | 176 | ||
260 | 166 | 177 | ||
261 | 167 | def format_ipv6_addr(address): | 178 | def format_ipv6_addr(address): |
264 | 168 | """ | 179 | """If address is IPv6, wrap it in '[]' otherwise return None. |
265 | 169 | IPv6 needs to be wrapped with [] in url link to parse correctly. | 180 | |
266 | 181 | This is required by most configuration files when specifying IPv6 | ||
267 | 182 | addresses. | ||
268 | 170 | """ | 183 | """ |
269 | 171 | if is_ipv6(address): | 184 | if is_ipv6(address): |
274 | 172 | address = "[%s]" % address | 185 | return "[%s]" % address |
271 | 173 | else: | ||
272 | 174 | log("Not a valid ipv6 address: %s" % address, level=WARNING) | ||
273 | 175 | address = None | ||
275 | 176 | 186 | ||
277 | 177 | return address | 187 | return None |
278 | 178 | 188 | ||
279 | 179 | 189 | ||
280 | 180 | def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False, | 190 | def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False, |
281 | 181 | fatal=True, exc_list=None): | 191 | fatal=True, exc_list=None): |
285 | 182 | """ | 192 | """Return the assigned IP address for a given interface, if any.""" |
283 | 183 | Return the assigned IP address for a given interface, if any, or []. | ||
284 | 184 | """ | ||
286 | 185 | # Extract nic if passed /dev/ethX | 193 | # Extract nic if passed /dev/ethX |
287 | 186 | if '/' in iface: | 194 | if '/' in iface: |
288 | 187 | iface = iface.split('/')[-1] | 195 | iface = iface.split('/')[-1] |
289 | 196 | |||
290 | 188 | if not exc_list: | 197 | if not exc_list: |
291 | 189 | exc_list = [] | 198 | exc_list = [] |
292 | 199 | |||
293 | 190 | try: | 200 | try: |
294 | 191 | inet_num = getattr(netifaces, inet_type) | 201 | inet_num = getattr(netifaces, inet_type) |
295 | 192 | except AttributeError: | 202 | except AttributeError: |
297 | 193 | raise Exception('Unknown inet type ' + str(inet_type)) | 203 | raise Exception("Unknown inet type '%s'" % str(inet_type)) |
298 | 194 | 204 | ||
299 | 195 | interfaces = netifaces.interfaces() | 205 | interfaces = netifaces.interfaces() |
300 | 196 | if inc_aliases: | 206 | if inc_aliases: |
301 | @@ -198,15 +208,18 @@ | |||
302 | 198 | for _iface in interfaces: | 208 | for _iface in interfaces: |
303 | 199 | if iface == _iface or _iface.split(':')[0] == iface: | 209 | if iface == _iface or _iface.split(':')[0] == iface: |
304 | 200 | ifaces.append(_iface) | 210 | ifaces.append(_iface) |
305 | 211 | |||
306 | 201 | if fatal and not ifaces: | 212 | if fatal and not ifaces: |
307 | 202 | raise Exception("Invalid interface '%s'" % iface) | 213 | raise Exception("Invalid interface '%s'" % iface) |
308 | 214 | |||
309 | 203 | ifaces.sort() | 215 | ifaces.sort() |
310 | 204 | else: | 216 | else: |
311 | 205 | if iface not in interfaces: | 217 | if iface not in interfaces: |
312 | 206 | if fatal: | 218 | if fatal: |
314 | 207 | raise Exception("%s not found " % (iface)) | 219 | raise Exception("Interface '%s' not found " % (iface)) |
315 | 208 | else: | 220 | else: |
316 | 209 | return [] | 221 | return [] |
317 | 222 | |||
318 | 210 | else: | 223 | else: |
319 | 211 | ifaces = [iface] | 224 | ifaces = [iface] |
320 | 212 | 225 | ||
321 | @@ -217,10 +230,13 @@ | |||
322 | 217 | for entry in net_info[inet_num]: | 230 | for entry in net_info[inet_num]: |
323 | 218 | if 'addr' in entry and entry['addr'] not in exc_list: | 231 | if 'addr' in entry and entry['addr'] not in exc_list: |
324 | 219 | addresses.append(entry['addr']) | 232 | addresses.append(entry['addr']) |
325 | 233 | |||
326 | 220 | if fatal and not addresses: | 234 | if fatal and not addresses: |
327 | 221 | raise Exception("Interface '%s' doesn't have any %s addresses." % | 235 | raise Exception("Interface '%s' doesn't have any %s addresses." % |
328 | 222 | (iface, inet_type)) | 236 | (iface, inet_type)) |
330 | 223 | return addresses | 237 | |
331 | 238 | return sorted(addresses) | ||
332 | 239 | |||
333 | 224 | 240 | ||
334 | 225 | get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET') | 241 | get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET') |
335 | 226 | 242 | ||
336 | @@ -237,6 +253,7 @@ | |||
337 | 237 | raw = re.match(ll_key, _addr) | 253 | raw = re.match(ll_key, _addr) |
338 | 238 | if raw: | 254 | if raw: |
339 | 239 | _addr = raw.group(1) | 255 | _addr = raw.group(1) |
340 | 256 | |||
341 | 240 | if _addr == addr: | 257 | if _addr == addr: |
342 | 241 | log("Address '%s' is configured on iface '%s'" % | 258 | log("Address '%s' is configured on iface '%s'" % |
343 | 242 | (addr, iface)) | 259 | (addr, iface)) |
344 | @@ -247,8 +264,9 @@ | |||
345 | 247 | 264 | ||
346 | 248 | 265 | ||
347 | 249 | def sniff_iface(f): | 266 | def sniff_iface(f): |
350 | 250 | """If no iface provided, inject net iface inferred from unit private | 267 | """Ensure decorated function is called with a value for iface. |
351 | 251 | address. | 268 | |
352 | 269 | If no iface provided, inject net iface inferred from unit private address. | ||
353 | 252 | """ | 270 | """ |
354 | 253 | def iface_sniffer(*args, **kwargs): | 271 | def iface_sniffer(*args, **kwargs): |
355 | 254 | if not kwargs.get('iface', None): | 272 | if not kwargs.get('iface', None): |
356 | @@ -291,7 +309,7 @@ | |||
357 | 291 | if global_addrs: | 309 | if global_addrs: |
358 | 292 | # Make sure any found global addresses are not temporary | 310 | # Make sure any found global addresses are not temporary |
359 | 293 | cmd = ['ip', 'addr', 'show', iface] | 311 | cmd = ['ip', 'addr', 'show', iface] |
361 | 294 | out = subprocess.check_output(cmd) | 312 | out = subprocess.check_output(cmd).decode('UTF-8') |
362 | 295 | if dynamic_only: | 313 | if dynamic_only: |
363 | 296 | key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*") | 314 | key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*") |
364 | 297 | else: | 315 | else: |
365 | @@ -313,33 +331,51 @@ | |||
366 | 313 | return addrs | 331 | return addrs |
367 | 314 | 332 | ||
368 | 315 | if fatal: | 333 | if fatal: |
370 | 316 | raise Exception("Interface '%s' doesn't have a scope global " | 334 | raise Exception("Interface '%s' does not have a scope global " |
371 | 317 | "non-temporary ipv6 address." % iface) | 335 | "non-temporary ipv6 address." % iface) |
372 | 318 | 336 | ||
373 | 319 | return [] | 337 | return [] |
374 | 320 | 338 | ||
375 | 321 | 339 | ||
376 | 322 | def get_bridges(vnic_dir='/sys/devices/virtual/net'): | 340 | def get_bridges(vnic_dir='/sys/devices/virtual/net'): |
382 | 323 | """ | 341 | """Return a list of bridges on the system.""" |
383 | 324 | Return a list of bridges on the system or [] | 342 | b_regex = "%s/*/bridge" % vnic_dir |
384 | 325 | """ | 343 | return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)] |
380 | 326 | b_rgex = vnic_dir + '/*/bridge' | ||
381 | 327 | return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)] | ||
385 | 328 | 344 | ||
386 | 329 | 345 | ||
387 | 330 | def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'): | 346 | def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'): |
393 | 331 | """ | 347 | """Return a list of nics comprising a given bridge on the system.""" |
394 | 332 | Return a list of nics comprising a given bridge on the system or [] | 348 | brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge) |
395 | 333 | """ | 349 | return [x.split('/')[-1] for x in glob.glob(brif_regex)] |
391 | 334 | brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge) | ||
392 | 335 | return [x.split('/')[-1] for x in glob.glob(brif_rgex)] | ||
396 | 336 | 350 | ||
397 | 337 | 351 | ||
398 | 338 | def is_bridge_member(nic): | 352 | def is_bridge_member(nic): |
402 | 339 | """ | 353 | """Check if a given nic is a member of a bridge.""" |
400 | 340 | Check if a given nic is a member of a bridge | ||
401 | 341 | """ | ||
403 | 342 | for bridge in get_bridges(): | 354 | for bridge in get_bridges(): |
404 | 343 | if nic in get_bridge_nics(bridge): | 355 | if nic in get_bridge_nics(bridge): |
405 | 344 | return True | 356 | return True |
406 | 357 | |||
407 | 345 | return False | 358 | return False |
408 | 359 | |||
409 | 360 | |||
410 | 361 | def configure_phy_nic_mtu(mng_ip=None): | ||
411 | 362 | """Configure mtu for physical nic.""" | ||
412 | 363 | phy_nic_mtu = config('phy-nic-mtu') | ||
413 | 364 | if phy_nic_mtu >= 1500: | ||
414 | 365 | phy_nic = None | ||
415 | 366 | if mng_ip is None: | ||
416 | 367 | mng_ip = unit_get('private-address') | ||
417 | 368 | for nic in list_nics(['eth', 'bond', 'br']): | ||
418 | 369 | if mng_ip in get_ipv4_addr(nic, fatal=False): | ||
419 | 370 | phy_nic = nic | ||
420 | 371 | # need to find the associated phy nic for bridge | ||
421 | 372 | if nic.startswith('br'): | ||
422 | 373 | for brnic in get_bridge_nics(nic): | ||
423 | 374 | if brnic.startswith('eth') or brnic.startswith('bond'): | ||
424 | 375 | phy_nic = brnic | ||
425 | 376 | break | ||
426 | 377 | break | ||
427 | 378 | if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic): | ||
428 | 379 | set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True) | ||
429 | 380 | log('set mtu={} for phy_nic={}' | ||
430 | 381 | .format(phy_nic_mtu, phy_nic), level=INFO) | ||
431 | 346 | 382 | ||
432 | === added file 'hooks/charmhelpers/contrib/network/ufw.py' | |||
433 | --- hooks/charmhelpers/contrib/network/ufw.py 1970-01-01 00:00:00 +0000 | |||
434 | +++ hooks/charmhelpers/contrib/network/ufw.py 2014-12-10 07:57:54 +0000 | |||
435 | @@ -0,0 +1,182 @@ | |||
436 | 1 | """ | ||
437 | 2 | This module contains helpers to add and remove ufw rules. | ||
438 | 3 | |||
439 | 4 | Examples: | ||
440 | 5 | |||
441 | 6 | - open SSH port for subnet 10.0.3.0/24: | ||
442 | 7 | |||
443 | 8 | >>> from charmhelpers.contrib.network import ufw | ||
444 | 9 | >>> ufw.enable() | ||
445 | 10 | >>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp') | ||
446 | 11 | |||
447 | 12 | - open service by name as defined in /etc/services: | ||
448 | 13 | |||
449 | 14 | >>> from charmhelpers.contrib.network import ufw | ||
450 | 15 | >>> ufw.enable() | ||
451 | 16 | >>> ufw.service('ssh', 'open') | ||
452 | 17 | |||
453 | 18 | - close service by port number: | ||
454 | 19 | |||
455 | 20 | >>> from charmhelpers.contrib.network import ufw | ||
456 | 21 | >>> ufw.enable() | ||
457 | 22 | >>> ufw.service('4949', 'close') # munin | ||
458 | 23 | """ | ||
459 | 24 | |||
460 | 25 | __author__ = "Felipe Reyes <felipe.reyes@canonical.com>" | ||
461 | 26 | |||
462 | 27 | import re | ||
463 | 28 | import subprocess | ||
464 | 29 | from charmhelpers.core import hookenv | ||
465 | 30 | |||
466 | 31 | |||
467 | 32 | def is_enabled(): | ||
468 | 33 | """ | ||
469 | 34 | Check if `ufw` is enabled | ||
470 | 35 | |||
471 | 36 | :returns: True if ufw is enabled | ||
472 | 37 | """ | ||
473 | 38 | output = subprocess.check_output(['ufw', 'status'], env={'LANG': 'en_US'}) | ||
474 | 39 | |||
475 | 40 | m = re.findall(r'^Status: active\n', output, re.M) | ||
476 | 41 | |||
477 | 42 | return len(m) >= 1 | ||
478 | 43 | |||
479 | 44 | |||
480 | 45 | def enable(): | ||
481 | 46 | """ | ||
482 | 47 | Enable ufw | ||
483 | 48 | |||
484 | 49 | :returns: True if ufw is successfully enabled | ||
485 | 50 | """ | ||
486 | 51 | if is_enabled(): | ||
487 | 52 | return True | ||
488 | 53 | |||
489 | 54 | output = subprocess.check_output(['ufw', 'enable'], env={'LANG': 'en_US'}) | ||
490 | 55 | |||
491 | 56 | m = re.findall('^Firewall is active and enabled on system startup\n', | ||
492 | 57 | output, re.M) | ||
493 | 58 | hookenv.log(output, level='DEBUG') | ||
494 | 59 | |||
495 | 60 | if len(m) == 0: | ||
496 | 61 | hookenv.log("ufw couldn't be enabled", level='WARN') | ||
497 | 62 | return False | ||
498 | 63 | else: | ||
499 | 64 | hookenv.log("ufw enabled", level='INFO') | ||
500 | 65 | return True | ||
501 | 66 | |||
502 | 67 | |||
503 | 68 | def disable(): | ||
504 | 69 | """ | ||
505 | 70 | Disable ufw | ||
506 | 71 | |||
507 | 72 | :returns: True if ufw is successfully disabled | ||
508 | 73 | """ | ||
509 | 74 | if not is_enabled(): | ||
510 | 75 | return True | ||
511 | 76 | |||
512 | 77 | output = subprocess.check_output(['ufw', 'disable'], env={'LANG': 'en_US'}) | ||
513 | 78 | |||
514 | 79 | m = re.findall(r'^Firewall stopped and disabled on system startup\n', | ||
515 | 80 | output, re.M) | ||
516 | 81 | hookenv.log(output, level='DEBUG') | ||
517 | 82 | |||
518 | 83 | if len(m) == 0: | ||
519 | 84 | hookenv.log("ufw couldn't be disabled", level='WARN') | ||
520 | 85 | return False | ||
521 | 86 | else: | ||
522 | 87 | hookenv.log("ufw disabled", level='INFO') | ||
523 | 88 | return True | ||
524 | 89 | |||
525 | 90 | |||
526 | 91 | def modify_access(src, dst='any', port=None, proto=None, action='allow'): | ||
527 | 92 | """ | ||
528 | 93 | Grant or revoke access for an address or subnet | ||
529 | 94 | |||
530 | 95 | :param src: address (e.g. 192.168.1.234) or subnet | ||
531 | 96 | (e.g. 192.168.1.0/24). | ||
532 | 97 | :param dst: destination of the connection; if the machine has multiple | ||
533 | 98 | IPs and connections to only one of them should be accepted, | ||
534 | 99 | this field has to be set. | ||
535 | 100 | :param port: destination port | ||
536 | 101 | :param proto: protocol (tcp or udp) | ||
537 | 102 | :param action: `allow` or `delete` | ||
538 | 103 | """ | ||
539 | 104 | if not is_enabled(): | ||
540 | 105 | hookenv.log('ufw is disabled, skipping modify_access()', level='WARN') | ||
541 | 106 | return | ||
542 | 107 | |||
543 | 108 | if action == 'delete': | ||
544 | 109 | cmd = ['ufw', 'delete', 'allow'] | ||
545 | 110 | else: | ||
546 | 111 | cmd = ['ufw', action] | ||
547 | 112 | |||
548 | 113 | if src is not None: | ||
549 | 114 | cmd += ['from', src] | ||
550 | 115 | |||
551 | 116 | if dst is not None: | ||
552 | 117 | cmd += ['to', dst] | ||
553 | 118 | |||
554 | 119 | if port is not None: | ||
555 | 120 | cmd += ['port', port] | ||
556 | 121 | |||
557 | 122 | if proto is not None: | ||
558 | 123 | cmd += ['proto', proto] | ||
559 | 124 | |||
560 | 125 | hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG') | ||
561 | 126 | p = subprocess.Popen(cmd, stdout=subprocess.PIPE, | ||
stderr=subprocess.PIPE) | ||
562 | 127 | (stdout, stderr) = p.communicate() | ||
563 | 128 | |||
564 | 129 | hookenv.log(stdout, level='INFO') | ||
565 | 130 | |||
566 | 131 | if p.returncode != 0: | ||
567 | 132 | hookenv.log(stderr, level='ERROR') | ||
568 | 133 | hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd), | ||
569 | 134 | p.returncode), | ||
570 | 135 | level='ERROR') | ||
571 | 136 | |||
572 | 137 | |||
573 | 138 | def grant_access(src, dst='any', port=None, proto=None): | ||
574 | 139 | """ | ||
575 | 140 | Grant access to an address or subnet | ||
576 | 141 | |||
577 | 142 | :param src: address (e.g. 192.168.1.234) or subnet | ||
578 | 143 | (e.g. 192.168.1.0/24). | ||
579 | 144 | :param dst: destination of the connection; if the machine has multiple | ||
580 | 145 | IPs and connections to only one of them should be accepted, | ||
581 | 146 | this field has to be set. | ||
582 | 147 | :param port: destination port | ||
583 | 148 | :param proto: protocol (tcp or udp) | ||
584 | 149 | """ | ||
585 | 150 | return modify_access(src, dst=dst, port=port, proto=proto, action='allow') | ||
586 | 151 | |||
587 | 152 | |||
588 | 153 | def revoke_access(src, dst='any', port=None, proto=None): | ||
589 | 154 | """ | ||
590 | 155 | Revoke access to an address or subnet | ||
591 | 156 | |||
592 | 157 | :param src: address (e.g. 192.168.1.234) or subnet | ||
593 | 158 | (e.g. 192.168.1.0/24). | ||
594 | 159 | :param dst: destination of the connection; if the machine has multiple | ||
595 | 160 | IPs and connections to only one of them should be accepted, | ||
596 | 161 | this field has to be set. | ||
597 | 162 | :param port: destination port | ||
598 | 163 | :param proto: protocol (tcp or udp) | ||
599 | 164 | """ | ||
600 | 165 | return modify_access(src, dst=dst, port=port, proto=proto, action='delete') | ||
601 | 166 | |||
602 | 167 | |||
603 | 168 | def service(name, action): | ||
604 | 169 | """ | ||
605 | 170 | Open/close access to a service | ||
606 | 171 | |||
607 | 172 | :param name: could be a service name defined in `/etc/services` or a port | ||
608 | 173 | number. | ||
609 | 174 | :param action: `open` or `close` | ||
610 | 175 | """ | ||
611 | 176 | if action == 'open': | ||
612 | 177 | subprocess.check_output(['ufw', 'allow', name]) | ||
613 | 178 | elif action == 'close': | ||
614 | 179 | subprocess.check_output(['ufw', 'delete', 'allow', name]) | ||
615 | 180 | else: | ||
616 | 181 | raise Exception(("'{}' not supported, use 'open' " | ||
617 | 182 | "or 'close'").format(action)) | ||
618 | 0 | 183 | ||
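The `modify_access()` helper above maps its arguments onto a `ufw` CLI invocation. As a sanity check of that mapping, the rule construction can be sketched in isolation (a minimal re-implementation for illustration only, not the charm-helpers API itself; it builds the argument list without shelling out):

```python
def build_ufw_cmd(src, dst='any', port=None, proto=None, action='allow'):
    # Mirrors modify_access(): 'delete' removes a previously allowed rule,
    # any other action is passed straight through to ufw.
    if action == 'delete':
        cmd = ['ufw', 'delete', 'allow']
    else:
        cmd = ['ufw', action]
    if src is not None:
        cmd += ['from', src]
    if dst is not None:
        cmd += ['to', dst]
    if port is not None:
        cmd += ['port', port]
    if proto is not None:
        cmd += ['proto', proto]
    return cmd

print(build_ufw_cmd('10.0.3.0/24', port='22', proto='tcp'))
# → ['ufw', 'allow', 'from', '10.0.3.0/24', 'to', 'any', 'port', '22', 'proto', 'tcp']
```

This matches the docstring example of opening SSH for `10.0.3.0/24`; `grant_access()` and `revoke_access()` are thin wrappers that fix `action` to `allow` and `delete` respectively.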
619 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py' | |||
620 | --- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:57:43 +0000 | |||
621 | +++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-10 07:57:54 +0000 | |||
622 | @@ -1,3 +1,4 @@ | |||
623 | 1 | import six | ||
624 | 1 | from charmhelpers.contrib.amulet.deployment import ( | 2 | from charmhelpers.contrib.amulet.deployment import ( |
625 | 2 | AmuletDeployment | 3 | AmuletDeployment |
626 | 3 | ) | 4 | ) |
627 | @@ -69,7 +70,7 @@ | |||
628 | 69 | 70 | ||
629 | 70 | def _configure_services(self, configs): | 71 | def _configure_services(self, configs): |
630 | 71 | """Configure all of the services.""" | 72 | """Configure all of the services.""" |
632 | 72 | for service, config in configs.iteritems(): | 73 | for service, config in six.iteritems(configs): |
633 | 73 | self.d.configure(service, config) | 74 | self.d.configure(service, config) |
634 | 74 | 75 | ||
635 | 75 | def _get_openstack_release(self): | 76 | def _get_openstack_release(self): |
636 | 76 | 77 | ||
637 | === modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py' | |||
638 | --- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:57:43 +0000 | |||
639 | +++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-10 07:57:54 +0000 | |||
640 | @@ -7,6 +7,8 @@ | |||
641 | 7 | import keystoneclient.v2_0 as keystone_client | 7 | import keystoneclient.v2_0 as keystone_client |
642 | 8 | import novaclient.v1_1.client as nova_client | 8 | import novaclient.v1_1.client as nova_client |
643 | 9 | 9 | ||
644 | 10 | import six | ||
645 | 11 | |||
646 | 10 | from charmhelpers.contrib.amulet.utils import ( | 12 | from charmhelpers.contrib.amulet.utils import ( |
647 | 11 | AmuletUtils | 13 | AmuletUtils |
648 | 12 | ) | 14 | ) |
649 | @@ -60,7 +62,7 @@ | |||
650 | 60 | expected service catalog endpoints. | 62 | expected service catalog endpoints. |
651 | 61 | """ | 63 | """ |
652 | 62 | self.log.debug('actual: {}'.format(repr(actual))) | 64 | self.log.debug('actual: {}'.format(repr(actual))) |
654 | 63 | for k, v in expected.iteritems(): | 65 | for k, v in six.iteritems(expected): |
655 | 64 | if k in actual: | 66 | if k in actual: |
656 | 65 | ret = self._validate_dict_data(expected[k][0], actual[k][0]) | 67 | ret = self._validate_dict_data(expected[k][0], actual[k][0]) |
657 | 66 | if ret: | 68 | if ret: |
658 | 67 | 69 | ||
659 | === modified file 'hooks/charmhelpers/contrib/openstack/context.py' | |||
660 | --- hooks/charmhelpers/contrib/openstack/context.py 2014-10-06 21:57:43 +0000 | |||
661 | +++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-10 07:57:54 +0000 | |||
662 | @@ -1,20 +1,18 @@ | |||
663 | 1 | import json | 1 | import json |
664 | 2 | import os | 2 | import os |
665 | 3 | import time | 3 | import time |
666 | 4 | |||
667 | 5 | from base64 import b64decode | 4 | from base64 import b64decode |
668 | 5 | from subprocess import check_call | ||
669 | 6 | 6 | ||
673 | 7 | from subprocess import ( | 7 | import six |
671 | 8 | check_call | ||
672 | 9 | ) | ||
674 | 10 | 8 | ||
675 | 11 | from charmhelpers.fetch import ( | 9 | from charmhelpers.fetch import ( |
676 | 12 | apt_install, | 10 | apt_install, |
677 | 13 | filter_installed_packages, | 11 | filter_installed_packages, |
678 | 14 | ) | 12 | ) |
679 | 15 | |||
680 | 16 | from charmhelpers.core.hookenv import ( | 13 | from charmhelpers.core.hookenv import ( |
681 | 17 | config, | 14 | config, |
682 | 15 | is_relation_made, | ||
683 | 18 | local_unit, | 16 | local_unit, |
684 | 19 | log, | 17 | log, |
685 | 20 | relation_get, | 18 | relation_get, |
686 | @@ -23,41 +21,40 @@ | |||
687 | 23 | relation_set, | 21 | relation_set, |
688 | 24 | unit_get, | 22 | unit_get, |
689 | 25 | unit_private_ip, | 23 | unit_private_ip, |
690 | 24 | DEBUG, | ||
691 | 25 | INFO, | ||
692 | 26 | WARNING, | ||
693 | 26 | ERROR, | 27 | ERROR, |
694 | 27 | INFO | ||
695 | 28 | ) | 28 | ) |
696 | 29 | |||
697 | 30 | from charmhelpers.core.host import ( | 29 | from charmhelpers.core.host import ( |
698 | 31 | mkdir, | 30 | mkdir, |
700 | 32 | write_file | 31 | write_file, |
701 | 33 | ) | 32 | ) |
702 | 34 | |||
703 | 35 | from charmhelpers.contrib.hahelpers.cluster import ( | 33 | from charmhelpers.contrib.hahelpers.cluster import ( |
704 | 36 | determine_apache_port, | 34 | determine_apache_port, |
705 | 37 | determine_api_port, | 35 | determine_api_port, |
706 | 38 | https, | 36 | https, |
708 | 39 | is_clustered | 37 | is_clustered, |
709 | 40 | ) | 38 | ) |
710 | 41 | |||
711 | 42 | from charmhelpers.contrib.hahelpers.apache import ( | 39 | from charmhelpers.contrib.hahelpers.apache import ( |
712 | 43 | get_cert, | 40 | get_cert, |
713 | 44 | get_ca_cert, | 41 | get_ca_cert, |
714 | 45 | install_ca_cert, | 42 | install_ca_cert, |
715 | 46 | ) | 43 | ) |
716 | 47 | |||
717 | 48 | from charmhelpers.contrib.openstack.neutron import ( | 44 | from charmhelpers.contrib.openstack.neutron import ( |
718 | 49 | neutron_plugin_attribute, | 45 | neutron_plugin_attribute, |
719 | 50 | ) | 46 | ) |
720 | 51 | |||
721 | 52 | from charmhelpers.contrib.network.ip import ( | 47 | from charmhelpers.contrib.network.ip import ( |
722 | 53 | get_address_in_network, | 48 | get_address_in_network, |
723 | 54 | get_ipv6_addr, | 49 | get_ipv6_addr, |
724 | 55 | get_netmask_for_address, | 50 | get_netmask_for_address, |
725 | 56 | format_ipv6_addr, | 51 | format_ipv6_addr, |
727 | 57 | is_address_in_network | 52 | is_address_in_network, |
728 | 58 | ) | 53 | ) |
729 | 54 | from charmhelpers.contrib.openstack.utils import get_host_ip | ||
730 | 59 | 55 | ||
731 | 60 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' | 56 | CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt' |
732 | 57 | ADDRESS_TYPES = ['admin', 'internal', 'public'] | ||
733 | 61 | 58 | ||
734 | 62 | 59 | ||
735 | 63 | class OSContextError(Exception): | 60 | class OSContextError(Exception): |
736 | @@ -65,7 +62,7 @@ | |||
737 | 65 | 62 | ||
738 | 66 | 63 | ||
739 | 67 | def ensure_packages(packages): | 64 | def ensure_packages(packages): |
741 | 68 | '''Install but do not upgrade required plugin packages''' | 65 | """Install but do not upgrade required plugin packages.""" |
742 | 69 | required = filter_installed_packages(packages) | 66 | required = filter_installed_packages(packages) |
743 | 70 | if required: | 67 | if required: |
744 | 71 | apt_install(required, fatal=True) | 68 | apt_install(required, fatal=True) |
745 | @@ -73,20 +70,27 @@ | |||
746 | 73 | 70 | ||
747 | 74 | def context_complete(ctxt): | 71 | def context_complete(ctxt): |
748 | 75 | _missing = [] | 72 | _missing = [] |
750 | 76 | for k, v in ctxt.iteritems(): | 73 | for k, v in six.iteritems(ctxt): |
751 | 77 | if v is None or v == '': | 74 | if v is None or v == '': |
752 | 78 | _missing.append(k) | 75 | _missing.append(k) |
753 | 76 | |||
754 | 79 | if _missing: | 77 | if _missing: |
756 | 80 | log('Missing required data: %s' % ' '.join(_missing), level='INFO') | 78 | log('Missing required data: %s' % ' '.join(_missing), level=INFO) |
757 | 81 | return False | 79 | return False |
758 | 80 | |||
759 | 82 | return True | 81 | return True |
760 | 83 | 82 | ||
761 | 84 | 83 | ||
762 | 85 | def config_flags_parser(config_flags): | 84 | def config_flags_parser(config_flags): |
763 | 85 | """Parses config flags string into dict. | ||
764 | 86 | |||
765 | 87 | The provided config_flags string may be a list of comma-separated values | ||
766 | 88 | which themselves may be comma-separated list of values. | ||
767 | 89 | """ | ||
768 | 86 | if config_flags.find('==') >= 0: | 90 | if config_flags.find('==') >= 0: |
771 | 87 | log("config_flags is not in expected format (key=value)", | 91 | log("config_flags is not in expected format (key=value)", level=ERROR) |
770 | 88 | level=ERROR) | ||
772 | 89 | raise OSContextError | 92 | raise OSContextError |
773 | 93 | |||
774 | 90 | # strip the following from each value. | 94 | # strip the following from each value. |
775 | 91 | post_strippers = ' ,' | 95 | post_strippers = ' ,' |
776 | 92 | # we strip any leading/trailing '=' or ' ' from the string then | 96 | # we strip any leading/trailing '=' or ' ' from the string then |
777 | @@ -94,7 +98,7 @@ | |||
778 | 94 | split = config_flags.strip(' =').split('=') | 98 | split = config_flags.strip(' =').split('=') |
779 | 95 | limit = len(split) | 99 | limit = len(split) |
780 | 96 | flags = {} | 100 | flags = {} |
782 | 97 | for i in xrange(0, limit - 1): | 101 | for i in range(0, limit - 1): |
783 | 98 | current = split[i] | 102 | current = split[i] |
784 | 99 | next = split[i + 1] | 103 | next = split[i + 1] |
785 | 100 | vindex = next.rfind(',') | 104 | vindex = next.rfind(',') |
786 | @@ -109,17 +113,18 @@ | |||
787 | 109 | # if this not the first entry, expect an embedded key. | 113 | # if this not the first entry, expect an embedded key. |
788 | 110 | index = current.rfind(',') | 114 | index = current.rfind(',') |
789 | 111 | if index < 0: | 115 | if index < 0: |
792 | 112 | log("invalid config value(s) at index %s" % (i), | 116 | log("Invalid config value(s) at index %s" % (i), level=ERROR) |
791 | 113 | level=ERROR) | ||
793 | 114 | raise OSContextError | 117 | raise OSContextError |
794 | 115 | key = current[index + 1:] | 118 | key = current[index + 1:] |
795 | 116 | 119 | ||
796 | 117 | # Add to collection. | 120 | # Add to collection. |
797 | 118 | flags[key.strip(post_strippers)] = value.rstrip(post_strippers) | 121 | flags[key.strip(post_strippers)] = value.rstrip(post_strippers) |
798 | 122 | |||
799 | 119 | return flags | 123 | return flags |
800 | 120 | 124 | ||
801 | 121 | 125 | ||
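For readers unfamiliar with the format `config_flags_parser` handles: the input is a comma-separated `key=value` list in which values may themselves contain commas, so the key of each entry is recovered from whatever follows the last comma before the next `=`. A simplified sketch of that behaviour (illustration only, not the charm-helpers implementation, which also validates input and raises `OSContextError`):

```python
def parse_config_flags(config_flags):
    # Split on '='; for each chunk, the tail after the last comma is the
    # next key, and everything before it belongs to the previous value.
    split = config_flags.strip(' =').split('=')
    flags = {}
    for i in range(len(split) - 1):
        current = split[i]
        nxt = split[i + 1]
        vindex = nxt.rfind(',')
        if i == len(split) - 2 or vindex < 0:
            value = nxt
        else:
            value = nxt[:vindex]
        if i == 0:
            key = current
        else:
            key = current[current.rfind(',') + 1:]
        flags[key.strip(' ,')] = value.rstrip(' ,')
    return flags

print(parse_config_flags('opts=x,y,flag=z'))
# → {'opts': 'x,y', 'flag': 'z'}
```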
802 | 122 | class OSContextGenerator(object): | 126 | class OSContextGenerator(object): |
803 | 127 | """Base class for all context generators.""" | ||
804 | 123 | interfaces = [] | 128 | interfaces = [] |
805 | 124 | 129 | ||
806 | 125 | def __call__(self): | 130 | def __call__(self): |
807 | @@ -131,11 +136,11 @@ | |||
808 | 131 | 136 | ||
809 | 132 | def __init__(self, | 137 | def __init__(self, |
810 | 133 | database=None, user=None, relation_prefix=None, ssl_dir=None): | 138 | database=None, user=None, relation_prefix=None, ssl_dir=None): |
816 | 134 | ''' | 139 | """Allows inspecting relation for settings prefixed with |
817 | 135 | Allows inspecting relation for settings prefixed with relation_prefix. | 140 | relation_prefix. This is useful for parsing access for multiple |
818 | 136 | This is useful for parsing access for multiple databases returned via | 141 | databases returned via the shared-db interface (eg, nova_password, |
819 | 137 | the shared-db interface (eg, nova_password, quantum_password) | 142 | quantum_password) |
820 | 138 | ''' | 143 | """ |
821 | 139 | self.relation_prefix = relation_prefix | 144 | self.relation_prefix = relation_prefix |
822 | 140 | self.database = database | 145 | self.database = database |
823 | 141 | self.user = user | 146 | self.user = user |
824 | @@ -145,9 +150,8 @@ | |||
825 | 145 | self.database = self.database or config('database') | 150 | self.database = self.database or config('database') |
826 | 146 | self.user = self.user or config('database-user') | 151 | self.user = self.user or config('database-user') |
827 | 147 | if None in [self.database, self.user]: | 152 | if None in [self.database, self.user]: |
831 | 148 | log('Could not generate shared_db context. ' | 153 | log("Could not generate shared_db context. Missing required charm " |
832 | 149 | 'Missing required charm config options. ' | 154 | "config options. (database name and user)", level=ERROR) |
830 | 150 | '(database name and user)') | ||
833 | 151 | raise OSContextError | 155 | raise OSContextError |
834 | 152 | 156 | ||
835 | 153 | ctxt = {} | 157 | ctxt = {} |
836 | @@ -200,23 +204,24 @@ | |||
837 | 200 | def __call__(self): | 204 | def __call__(self): |
838 | 201 | self.database = self.database or config('database') | 205 | self.database = self.database or config('database') |
839 | 202 | if self.database is None: | 206 | if self.database is None: |
843 | 203 | log('Could not generate postgresql_db context. ' | 207 | log('Could not generate postgresql_db context. Missing required ' |
844 | 204 | 'Missing required charm config options. ' | 208 | 'charm config options. (database name)', level=ERROR) |
842 | 205 | '(database name)') | ||
845 | 206 | raise OSContextError | 209 | raise OSContextError |
846 | 210 | |||
847 | 207 | ctxt = {} | 211 | ctxt = {} |
848 | 208 | |||
849 | 209 | for rid in relation_ids(self.interfaces[0]): | 212 | for rid in relation_ids(self.interfaces[0]): |
850 | 210 | for unit in related_units(rid): | 213 | for unit in related_units(rid): |
858 | 211 | ctxt = { | 214 | rel_host = relation_get('host', rid=rid, unit=unit) |
859 | 212 | 'database_host': relation_get('host', rid=rid, unit=unit), | 215 | rel_user = relation_get('user', rid=rid, unit=unit) |
860 | 213 | 'database': self.database, | 216 | rel_passwd = relation_get('password', rid=rid, unit=unit) |
861 | 214 | 'database_user': relation_get('user', rid=rid, unit=unit), | 217 | ctxt = {'database_host': rel_host, |
862 | 215 | 'database_password': relation_get('password', rid=rid, unit=unit), | 218 | 'database': self.database, |
863 | 216 | 'database_type': 'postgresql', | 219 | 'database_user': rel_user, |
864 | 217 | } | 220 | 'database_password': rel_passwd, |
865 | 221 | 'database_type': 'postgresql'} | ||
866 | 218 | if context_complete(ctxt): | 222 | if context_complete(ctxt): |
867 | 219 | return ctxt | 223 | return ctxt |
868 | 224 | |||
869 | 220 | return {} | 225 | return {} |
870 | 221 | 226 | ||
871 | 222 | 227 | ||
872 | @@ -225,23 +230,29 @@ | |||
873 | 225 | ca_path = os.path.join(ssl_dir, 'db-client.ca') | 230 | ca_path = os.path.join(ssl_dir, 'db-client.ca') |
874 | 226 | with open(ca_path, 'w') as fh: | 231 | with open(ca_path, 'w') as fh: |
875 | 227 | fh.write(b64decode(rdata['ssl_ca'])) | 232 | fh.write(b64decode(rdata['ssl_ca'])) |
876 | 233 | |||
877 | 228 | ctxt['database_ssl_ca'] = ca_path | 234 | ctxt['database_ssl_ca'] = ca_path |
878 | 229 | elif 'ssl_ca' in rdata: | 235 | elif 'ssl_ca' in rdata: |
880 | 230 | log("Charm not setup for ssl support but ssl ca found") | 236 | log("Charm not setup for ssl support but ssl ca found", level=INFO) |
881 | 231 | return ctxt | 237 | return ctxt |
882 | 238 | |||
883 | 232 | if 'ssl_cert' in rdata: | 239 | if 'ssl_cert' in rdata: |
884 | 233 | cert_path = os.path.join( | 240 | cert_path = os.path.join( |
885 | 234 | ssl_dir, 'db-client.cert') | 241 | ssl_dir, 'db-client.cert') |
886 | 235 | if not os.path.exists(cert_path): | 242 | if not os.path.exists(cert_path): |
888 | 236 | log("Waiting 1m for ssl client cert validity") | 243 | log("Waiting 1m for ssl client cert validity", level=INFO) |
889 | 237 | time.sleep(60) | 244 | time.sleep(60) |
890 | 245 | |||
891 | 238 | with open(cert_path, 'w') as fh: | 246 | with open(cert_path, 'w') as fh: |
892 | 239 | fh.write(b64decode(rdata['ssl_cert'])) | 247 | fh.write(b64decode(rdata['ssl_cert'])) |
893 | 248 | |||
894 | 240 | ctxt['database_ssl_cert'] = cert_path | 249 | ctxt['database_ssl_cert'] = cert_path |
895 | 241 | key_path = os.path.join(ssl_dir, 'db-client.key') | 250 | key_path = os.path.join(ssl_dir, 'db-client.key') |
896 | 242 | with open(key_path, 'w') as fh: | 251 | with open(key_path, 'w') as fh: |
897 | 243 | fh.write(b64decode(rdata['ssl_key'])) | 252 | fh.write(b64decode(rdata['ssl_key'])) |
898 | 253 | |||
899 | 244 | ctxt['database_ssl_key'] = key_path | 254 | ctxt['database_ssl_key'] = key_path |
900 | 255 | |||
901 | 245 | return ctxt | 256 | return ctxt |
902 | 246 | 257 | ||
903 | 247 | 258 | ||
904 | @@ -249,9 +260,8 @@ | |||
905 | 249 | interfaces = ['identity-service'] | 260 | interfaces = ['identity-service'] |
906 | 250 | 261 | ||
907 | 251 | def __call__(self): | 262 | def __call__(self): |
909 | 252 | log('Generating template context for identity-service') | 263 | log('Generating template context for identity-service', level=DEBUG) |
910 | 253 | ctxt = {} | 264 | ctxt = {} |
911 | 254 | |||
912 | 255 | for rid in relation_ids('identity-service'): | 265 | for rid in relation_ids('identity-service'): |
913 | 256 | for unit in related_units(rid): | 266 | for unit in related_units(rid): |
914 | 257 | rdata = relation_get(rid=rid, unit=unit) | 267 | rdata = relation_get(rid=rid, unit=unit) |
915 | @@ -259,26 +269,24 @@ | |||
916 | 259 | serv_host = format_ipv6_addr(serv_host) or serv_host | 269 | serv_host = format_ipv6_addr(serv_host) or serv_host |
917 | 260 | auth_host = rdata.get('auth_host') | 270 | auth_host = rdata.get('auth_host') |
918 | 261 | auth_host = format_ipv6_addr(auth_host) or auth_host | 271 | auth_host = format_ipv6_addr(auth_host) or auth_host |
933 | 262 | 272 | svc_protocol = rdata.get('service_protocol') or 'http' | |
934 | 263 | ctxt = { | 273 | auth_protocol = rdata.get('auth_protocol') or 'http' |
935 | 264 | 'service_port': rdata.get('service_port'), | 274 | ctxt = {'service_port': rdata.get('service_port'), |
936 | 265 | 'service_host': serv_host, | 275 | 'service_host': serv_host, |
937 | 266 | 'auth_host': auth_host, | 276 | 'auth_host': auth_host, |
938 | 267 | 'auth_port': rdata.get('auth_port'), | 277 | 'auth_port': rdata.get('auth_port'), |
939 | 268 | 'admin_tenant_name': rdata.get('service_tenant'), | 278 | 'admin_tenant_name': rdata.get('service_tenant'), |
940 | 269 | 'admin_user': rdata.get('service_username'), | 279 | 'admin_user': rdata.get('service_username'), |
941 | 270 | 'admin_password': rdata.get('service_password'), | 280 | 'admin_password': rdata.get('service_password'), |
942 | 271 | 'service_protocol': | 281 | 'service_protocol': svc_protocol, |
943 | 272 | rdata.get('service_protocol') or 'http', | 282 | 'auth_protocol': auth_protocol} |
930 | 273 | 'auth_protocol': | ||
931 | 274 | rdata.get('auth_protocol') or 'http', | ||
932 | 275 | } | ||
944 | 276 | if context_complete(ctxt): | 283 | if context_complete(ctxt): |
945 | 277 | # NOTE(jamespage) this is required for >= icehouse | 284 | # NOTE(jamespage) this is required for >= icehouse |
946 | 278 | # so a missing value just indicates keystone needs | 285 | # so a missing value just indicates keystone needs |
947 | 279 | # upgrading | 286 | # upgrading |
948 | 280 | ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') | 287 | ctxt['admin_tenant_id'] = rdata.get('service_tenant_id') |
949 | 281 | return ctxt | 288 | return ctxt |
950 | 289 | |||
951 | 282 | return {} | 290 | return {} |
952 | 283 | 291 | ||
953 | 284 | 292 | ||
954 | @@ -291,21 +299,23 @@ | |||
955 | 291 | self.interfaces = [rel_name] | 299 | self.interfaces = [rel_name] |
956 | 292 | 300 | ||
957 | 293 | def __call__(self): | 301 | def __call__(self): |
959 | 294 | log('Generating template context for amqp') | 302 | log('Generating template context for amqp', level=DEBUG) |
960 | 295 | conf = config() | 303 | conf = config() |
961 | 296 | user_setting = 'rabbit-user' | ||
962 | 297 | vhost_setting = 'rabbit-vhost' | ||
963 | 298 | if self.relation_prefix: | 304 | if self.relation_prefix: |
966 | 299 | user_setting = self.relation_prefix + '-rabbit-user' | 305 | user_setting = '%s-rabbit-user' % (self.relation_prefix) |
967 | 300 | vhost_setting = self.relation_prefix + '-rabbit-vhost' | 306 | vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix) |
968 | 307 | else: | ||
969 | 308 | user_setting = 'rabbit-user' | ||
970 | 309 | vhost_setting = 'rabbit-vhost' | ||
971 | 301 | 310 | ||
972 | 302 | try: | 311 | try: |
973 | 303 | username = conf[user_setting] | 312 | username = conf[user_setting] |
974 | 304 | vhost = conf[vhost_setting] | 313 | vhost = conf[vhost_setting] |
975 | 305 | except KeyError as e: | 314 | except KeyError as e: |
978 | 306 | log('Could not generate shared_db context. ' | 315 | log('Could not generate shared_db context. Missing required charm ' |
979 | 307 | 'Missing required charm config options: %s.' % e) | 316 | 'config options: %s.' % e, level=ERROR) |
980 | 308 | raise OSContextError | 317 | raise OSContextError |
981 | 318 | |||
982 | 309 | ctxt = {} | 319 | ctxt = {} |
983 | 310 | for rid in relation_ids(self.rel_name): | 320 | for rid in relation_ids(self.rel_name): |
984 | 311 | ha_vip_only = False | 321 | ha_vip_only = False |
985 | @@ -319,6 +329,7 @@ | |||
986 | 319 | host = relation_get('private-address', rid=rid, unit=unit) | 329 | host = relation_get('private-address', rid=rid, unit=unit) |
987 | 320 | host = format_ipv6_addr(host) or host | 330 | host = format_ipv6_addr(host) or host |
988 | 321 | ctxt['rabbitmq_host'] = host | 331 | ctxt['rabbitmq_host'] = host |
989 | 332 | |||
990 | 322 | ctxt.update({ | 333 | ctxt.update({ |
991 | 323 | 'rabbitmq_user': username, | 334 | 'rabbitmq_user': username, |
992 | 324 | 'rabbitmq_password': relation_get('password', rid=rid, | 335 | 'rabbitmq_password': relation_get('password', rid=rid, |
993 | @@ -329,6 +340,7 @@ | |||
994 | 329 | ssl_port = relation_get('ssl_port', rid=rid, unit=unit) | 340 | ssl_port = relation_get('ssl_port', rid=rid, unit=unit) |
995 | 330 | if ssl_port: | 341 | if ssl_port: |
996 | 331 | ctxt['rabbit_ssl_port'] = ssl_port | 342 | ctxt['rabbit_ssl_port'] = ssl_port |
997 | 343 | |||
998 | 332 | ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) | 344 | ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit) |
999 | 333 | if ssl_ca: | 345 | if ssl_ca: |
1000 | 334 | ctxt['rabbit_ssl_ca'] = ssl_ca | 346 | ctxt['rabbit_ssl_ca'] = ssl_ca |
1001 | @@ -342,41 +354,45 @@ | |||
1002 | 342 | if context_complete(ctxt): | 354 | if context_complete(ctxt): |
1003 | 343 | if 'rabbit_ssl_ca' in ctxt: | 355 | if 'rabbit_ssl_ca' in ctxt: |
1004 | 344 | if not self.ssl_dir: | 356 | if not self.ssl_dir: |
1007 | 345 | log(("Charm not setup for ssl support " | 357 | log("Charm not setup for ssl support but ssl ca " |
1008 | 346 | "but ssl ca found")) | 358 | "found", level=INFO) |
1009 | 347 | break | 359 | break |
1010 | 360 | |||
1011 | 348 | ca_path = os.path.join( | 361 | ca_path = os.path.join( |
1012 | 349 | self.ssl_dir, 'rabbit-client-ca.pem') | 362 | self.ssl_dir, 'rabbit-client-ca.pem') |
1013 | 350 | with open(ca_path, 'w') as fh: | 363 | with open(ca_path, 'w') as fh: |
1014 | 351 | fh.write(b64decode(ctxt['rabbit_ssl_ca'])) | 364 | fh.write(b64decode(ctxt['rabbit_ssl_ca'])) |
1015 | 352 | ctxt['rabbit_ssl_ca'] = ca_path | 365 | ctxt['rabbit_ssl_ca'] = ca_path |
1016 | 366 | |||
1017 | 353 | # Sufficient information found = break out! | 367 | # Sufficient information found = break out! |
1018 | 354 | break | 368 | break |
1019 | 369 | |||
1020 | 355 | # Used for active/active rabbitmq >= grizzly | 370 | # Used for active/active rabbitmq >= grizzly |
1023 | 356 | if ('clustered' not in ctxt or ha_vip_only) \ | 371 | if (('clustered' not in ctxt or ha_vip_only) and |
1024 | 357 | and len(related_units(rid)) > 1: | 372 | len(related_units(rid)) > 1): |
1025 | 358 | rabbitmq_hosts = [] | 373 | rabbitmq_hosts = [] |
1026 | 359 | for unit in related_units(rid): | 374 | for unit in related_units(rid): |
1027 | 360 | host = relation_get('private-address', rid=rid, unit=unit) | 375 | host = relation_get('private-address', rid=rid, unit=unit) |
1028 | 361 | host = format_ipv6_addr(host) or host | 376 | host = format_ipv6_addr(host) or host |
1029 | 362 | rabbitmq_hosts.append(host) | 377 | rabbitmq_hosts.append(host) |
1031 | 363 | ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts) | 378 | |
1032 | 379 | ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts)) | ||
1033 | 380 | |||
1034 | 364 | if not context_complete(ctxt): | 381 | if not context_complete(ctxt): |
1035 | 365 | return {} | 382 | return {} |
1038 | 366 | else: | 383 | |
1039 | 367 | return ctxt | 384 | return ctxt |
1040 | 368 | 385 | ||
1041 | 369 | 386 | ||
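The change above to join `sorted(rabbitmq_hosts)` (and `sorted(mon_hosts)` in `CephContext` below) makes the rendered value independent of the order in which related units happen to be enumerated, so the templated config file does not churn between hook invocations:

```python
hosts = ['10.0.0.3', '10.0.0.1', '10.0.0.2']

# Joining a sorted copy yields the same string regardless of
# relation enumeration order, avoiding spurious config rewrites.
rabbitmq_hosts = ','.join(sorted(hosts))
print(rabbitmq_hosts)
# → 10.0.0.1,10.0.0.2,10.0.0.3
```

(Note the sort is lexicographic on the address strings; that is sufficient for determinism, which is all this change needs.)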
1042 | 370 | class CephContext(OSContextGenerator): | 387 | class CephContext(OSContextGenerator): |
1043 | 388 | """Generates context for /etc/ceph/ceph.conf templates.""" | ||
1044 | 371 | interfaces = ['ceph'] | 389 | interfaces = ['ceph'] |
1045 | 372 | 390 | ||
1046 | 373 | def __call__(self): | 391 | def __call__(self): |
1047 | 374 | '''This generates context for /etc/ceph/ceph.conf templates''' | ||
1048 | 375 | if not relation_ids('ceph'): | 392 | if not relation_ids('ceph'): |
1049 | 376 | return {} | 393 | return {} |
1050 | 377 | 394 | ||
1053 | 378 | log('Generating template context for ceph') | 395 | log('Generating template context for ceph', level=DEBUG) |
1052 | 379 | |||
1054 | 380 | mon_hosts = [] | 396 | mon_hosts = [] |
1055 | 381 | auth = None | 397 | auth = None |
1056 | 382 | key = None | 398 | key = None |
1057 | @@ -385,18 +401,18 @@ | |||
1058 | 385 | for unit in related_units(rid): | 401 | for unit in related_units(rid): |
1059 | 386 | auth = relation_get('auth', rid=rid, unit=unit) | 402 | auth = relation_get('auth', rid=rid, unit=unit) |
1060 | 387 | key = relation_get('key', rid=rid, unit=unit) | 403 | key = relation_get('key', rid=rid, unit=unit) |
1064 | 388 | ceph_addr = \ | 404 | ceph_pub_addr = relation_get('ceph-public-address', rid=rid, |
1065 | 389 | relation_get('ceph-public-address', rid=rid, unit=unit) or \ | 405 | unit=unit) |
1066 | 390 | relation_get('private-address', rid=rid, unit=unit) | 406 | unit_priv_addr = relation_get('private-address', rid=rid, |
1067 | 407 | unit=unit) | ||
1068 | 408 | ceph_addr = ceph_pub_addr or unit_priv_addr | ||
1069 | 391 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr | 409 | ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr |
1070 | 392 | mon_hosts.append(ceph_addr) | 410 | mon_hosts.append(ceph_addr) |
1071 | 393 | 411 | ||
1078 | 394 | ctxt = { | 412 | ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)), |
1079 | 395 | 'mon_hosts': ' '.join(mon_hosts), | 413 | 'auth': auth, |
1080 | 396 | 'auth': auth, | 414 | 'key': key, |
1081 | 397 | 'key': key, | 415 | 'use_syslog': use_syslog} |
1076 | 398 | 'use_syslog': use_syslog | ||
1077 | 399 | } | ||
1082 | 400 | 416 | ||
1083 | 401 | if not os.path.isdir('/etc/ceph'): | 417 | if not os.path.isdir('/etc/ceph'): |
1084 | 402 | os.mkdir('/etc/ceph') | 418 | os.mkdir('/etc/ceph') |
1085 | @@ -405,79 +421,68 @@ | |||
1086 | 405 | return {} | 421 | return {} |
1087 | 406 | 422 | ||
1088 | 407 | ensure_packages(['ceph-common']) | 423 | ensure_packages(['ceph-common']) |
1089 | 408 | |||
1090 | 409 | return ctxt | 424 | return ctxt |
1091 | 410 | 425 | ||
1092 | 411 | 426 | ||
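The refactored CephContext above now sorts `mon_hosts` before joining, which keeps the rendered ceph.conf stable across hook invocations. A minimal standalone sketch of that assembly, with the relation data stubbed out as plain `(ceph-public-address, private-address)` pairs (function and argument names are illustrative, not charm-helpers API):

```python
def build_ceph_ctxt(unit_addrs, auth, key, use_syslog):
    # unit_addrs: (ceph-public-address, private-address) per unit;
    # prefer the public address and fall back to private, mirroring
    # the relation_get chain in the diff.
    mon_hosts = [pub or priv for pub, priv in unit_addrs]
    # Sorting keeps the generated config deterministic regardless of
    # relation iteration order.
    return {'mon_hosts': ' '.join(sorted(mon_hosts)),
            'auth': auth,
            'key': key,
            'use_syslog': use_syslog}
```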
1093 | 412 | ADDRESS_TYPES = ['admin', 'internal', 'public'] | ||
1094 | 413 | |||
1095 | 414 | |||
1096 | 415 | class HAProxyContext(OSContextGenerator): | 427 | class HAProxyContext(OSContextGenerator): |
1097 | 428 | """Provides half a context for the haproxy template, which describes | ||
1098 | 429 | all peers to be included in the cluster. Each charm needs to include | ||
1099 | 430 | its own context generator that describes the port mapping. | ||
1100 | 431 | """ | ||
1101 | 416 | interfaces = ['cluster'] | 432 | interfaces = ['cluster'] |
1102 | 417 | 433 | ||
1103 | 434 | def __init__(self, singlenode_mode=False): | ||
1104 | 435 | self.singlenode_mode = singlenode_mode | ||
1105 | 436 | |||
1106 | 418 | def __call__(self): | 437 | def __call__(self): |
1113 | 419 | ''' | 438 | if not relation_ids('cluster') and not self.singlenode_mode: |
1108 | 420 | Builds half a context for the haproxy template, which describes | ||
1109 | 421 | all peers to be included in the cluster. Each charm needs to include | ||
1110 | 422 | its own context generator that describes the port mapping. | ||
1111 | 423 | ''' | ||
1112 | 424 | if not relation_ids('cluster'): | ||
1114 | 425 | return {} | 439 | return {} |
1115 | 426 | 440 | ||
1116 | 441 | if config('prefer-ipv6'): | ||
1117 | 442 | addr = get_ipv6_addr(exc_list=[config('vip')])[0] | ||
1118 | 443 | else: | ||
1119 | 444 | addr = get_host_ip(unit_get('private-address')) | ||
1120 | 445 | |||
1121 | 427 | l_unit = local_unit().replace('/', '-') | 446 | l_unit = local_unit().replace('/', '-') |
1122 | 428 | |||
1123 | 429 | if config('prefer-ipv6'): | ||
1124 | 430 | addr = get_ipv6_addr(exc_list=[config('vip')])[0] | ||
1125 | 431 | else: | ||
1126 | 432 | addr = unit_get('private-address') | ||
1127 | 433 | |||
1128 | 434 | cluster_hosts = {} | 447 | cluster_hosts = {} |
1129 | 435 | 448 | ||
1130 | 436 | # NOTE(jamespage): build out map of configured network endpoints | 449 | # NOTE(jamespage): build out map of configured network endpoints |
1131 | 437 | # and associated backends | 450 | # and associated backends |
1132 | 438 | for addr_type in ADDRESS_TYPES: | 451 | for addr_type in ADDRESS_TYPES: |
1135 | 439 | laddr = get_address_in_network( | 452 | cfg_opt = 'os-{}-network'.format(addr_type) |
1136 | 440 | config('os-{}-network'.format(addr_type))) | 453 | laddr = get_address_in_network(config(cfg_opt)) |
1137 | 441 | if laddr: | 454 | if laddr: |
1145 | 442 | cluster_hosts[laddr] = {} | 455 | netmask = get_netmask_for_address(laddr) |
1146 | 443 | cluster_hosts[laddr]['network'] = "{}/{}".format( | 456 | cluster_hosts[laddr] = {'network': "{}/{}".format(laddr, |
1147 | 444 | laddr, | 457 | netmask), |
1148 | 445 | get_netmask_for_address(laddr) | 458 | 'backends': {l_unit: laddr}} |
1142 | 446 | ) | ||
1143 | 447 | cluster_hosts[laddr]['backends'] = {} | ||
1144 | 448 | cluster_hosts[laddr]['backends'][l_unit] = laddr | ||
1149 | 449 | for rid in relation_ids('cluster'): | 459 | for rid in relation_ids('cluster'): |
1150 | 450 | for unit in related_units(rid): | 460 | for unit in related_units(rid): |
1151 | 451 | _unit = unit.replace('/', '-') | ||
1152 | 452 | _laddr = relation_get('{}-address'.format(addr_type), | 461 | _laddr = relation_get('{}-address'.format(addr_type), |
1153 | 453 | rid=rid, unit=unit) | 462 | rid=rid, unit=unit) |
1154 | 454 | if _laddr: | 463 | if _laddr: |
1155 | 464 | _unit = unit.replace('/', '-') | ||
1156 | 455 | cluster_hosts[laddr]['backends'][_unit] = _laddr | 465 | cluster_hosts[laddr]['backends'][_unit] = _laddr |
1157 | 456 | 466 | ||
1158 | 457 | # NOTE(jamespage) no split configurations found, just use | 467 | # NOTE(jamespage) no split configurations found, just use |
1159 | 458 | # private addresses | 468 | # private addresses |
1160 | 459 | if not cluster_hosts: | 469 | if not cluster_hosts: |
1168 | 460 | cluster_hosts[addr] = {} | 470 | netmask = get_netmask_for_address(addr) |
1169 | 461 | cluster_hosts[addr]['network'] = "{}/{}".format( | 471 | cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask), |
1170 | 462 | addr, | 472 | 'backends': {l_unit: addr}} |
1164 | 463 | get_netmask_for_address(addr) | ||
1165 | 464 | ) | ||
1166 | 465 | cluster_hosts[addr]['backends'] = {} | ||
1167 | 466 | cluster_hosts[addr]['backends'][l_unit] = addr | ||
1171 | 467 | for rid in relation_ids('cluster'): | 473 | for rid in relation_ids('cluster'): |
1172 | 468 | for unit in related_units(rid): | 474 | for unit in related_units(rid): |
1173 | 469 | _unit = unit.replace('/', '-') | ||
1174 | 470 | _laddr = relation_get('private-address', | 475 | _laddr = relation_get('private-address', |
1175 | 471 | rid=rid, unit=unit) | 476 | rid=rid, unit=unit) |
1176 | 472 | if _laddr: | 477 | if _laddr: |
1177 | 478 | _unit = unit.replace('/', '-') | ||
1178 | 473 | cluster_hosts[addr]['backends'][_unit] = _laddr | 479 | cluster_hosts[addr]['backends'][_unit] = _laddr |
1179 | 474 | 480 | ||
1183 | 475 | ctxt = { | 481 | ctxt = {'frontends': cluster_hosts} |
1181 | 476 | 'frontends': cluster_hosts, | ||
1182 | 477 | } | ||
1184 | 478 | 482 | ||
1185 | 479 | if config('haproxy-server-timeout'): | 483 | if config('haproxy-server-timeout'): |
1186 | 480 | ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') | 484 | ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout') |
1187 | 485 | |||
1188 | 481 | if config('haproxy-client-timeout'): | 486 | if config('haproxy-client-timeout'): |
1189 | 482 | ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') | 487 | ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout') |
1190 | 483 | 488 | ||
1191 | @@ -491,13 +496,18 @@ | |||
1192 | 491 | ctxt['stat_port'] = ':8888' | 496 | ctxt['stat_port'] = ':8888' |
1193 | 492 | 497 | ||
1194 | 493 | for frontend in cluster_hosts: | 498 | for frontend in cluster_hosts: |
1196 | 494 | if len(cluster_hosts[frontend]['backends']) > 1: | 499 | if (len(cluster_hosts[frontend]['backends']) > 1 or |
1197 | 500 | self.singlenode_mode): | ||
1198 | 495 | # Enable haproxy when we have enough peers. | 501 | # Enable haproxy when we have enough peers. |
1200 | 496 | log('Ensuring haproxy enabled in /etc/default/haproxy.') | 502 | log('Ensuring haproxy enabled in /etc/default/haproxy.', |
1201 | 503 | level=DEBUG) | ||
1202 | 497 | with open('/etc/default/haproxy', 'w') as out: | 504 | with open('/etc/default/haproxy', 'w') as out: |
1203 | 498 | out.write('ENABLED=1\n') | 505 | out.write('ENABLED=1\n') |
1204 | 506 | |||
1205 | 499 | return ctxt | 507 | return ctxt |
1207 | 500 | log('HAProxy context is incomplete, this unit has no peers.') | 508 | |
1208 | 509 | log('HAProxy context is incomplete, this unit has no peers.', | ||
1209 | 510 | level=INFO) | ||
1210 | 501 | return {} | 511 | return {} |
1211 | 502 | 512 | ||
1212 | 503 | 513 | ||
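The `cluster_hosts` map that HAProxyContext builds above keys each frontend by the local address and nests a `backends` dict per unit. A self-contained sketch of that shape (unit names and addresses are hypothetical):

```python
def build_cluster_hosts(l_unit, laddr, netmask, peer_addrs):
    # peer_addrs: unit-name -> address from the cluster relation.
    hosts = {laddr: {'network': '{}/{}'.format(laddr, netmask),
                     'backends': {l_unit: laddr}}}
    for unit, addr in peer_addrs.items():
        if addr:
            # Unit names use '-' in haproxy backend identifiers.
            hosts[laddr]['backends'][unit.replace('/', '-')] = addr
    return hosts
```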
1213 | @@ -505,29 +515,28 @@ | |||
1214 | 505 | interfaces = ['image-service'] | 515 | interfaces = ['image-service'] |
1215 | 506 | 516 | ||
1216 | 507 | def __call__(self): | 517 | def __call__(self): |
1222 | 508 | ''' | 518 | """Obtains the glance API server from the image-service relation. |
1223 | 509 | Obtains the glance API server from the image-service relation. Useful | 519 | Useful in nova and cinder (currently). |
1224 | 510 | in nova and cinder (currently). | 520 | """ |
1225 | 511 | ''' | 521 | log('Generating template context for image-service.', level=DEBUG) |
1221 | 512 | log('Generating template context for image-service.') | ||
1226 | 513 | rids = relation_ids('image-service') | 522 | rids = relation_ids('image-service') |
1227 | 514 | if not rids: | 523 | if not rids: |
1228 | 515 | return {} | 524 | return {} |
1229 | 525 | |||
1230 | 516 | for rid in rids: | 526 | for rid in rids: |
1231 | 517 | for unit in related_units(rid): | 527 | for unit in related_units(rid): |
1232 | 518 | api_server = relation_get('glance-api-server', | 528 | api_server = relation_get('glance-api-server', |
1233 | 519 | rid=rid, unit=unit) | 529 | rid=rid, unit=unit) |
1234 | 520 | if api_server: | 530 | if api_server: |
1235 | 521 | return {'glance_api_servers': api_server} | 531 | return {'glance_api_servers': api_server} |
1238 | 522 | log('ImageService context is incomplete. ' | 532 | |
1239 | 523 | 'Missing required relation data.') | 533 | log("ImageService context is incomplete. Missing required relation " |
1240 | 534 | "data.", level=INFO) | ||
1241 | 524 | return {} | 535 | return {} |
1242 | 525 | 536 | ||
1243 | 526 | 537 | ||
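The image-service context above scans related units and returns on the first `glance-api-server` value found. A simplified first-match scan over stubbed relation data (the key-to-context-name mapping here is an illustrative assumption):

```python
def first_relation_value(relations, key):
    # relations: rid -> {unit: data-dict}; return the first
    # non-empty value of `key`, as the image-service context does.
    for rid, units in relations.items():
        for unit, data in units.items():
            value = data.get(key)
            if value:
                return {key.replace('-', '_') + 's': value}
    return {}
```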
1244 | 527 | class ApacheSSLContext(OSContextGenerator): | 538 | class ApacheSSLContext(OSContextGenerator): |
1248 | 528 | 539 | """Generates a context for an apache vhost configuration that configures | |
1246 | 529 | """ | ||
1247 | 530 | Generates a context for an apache vhost configuration that configures | ||
1249 | 531 | HTTPS reverse proxying for one or many endpoints. Generated context | 540 | HTTPS reverse proxying for one or many endpoints. Generated context |
1250 | 532 | looks something like:: | 541 | looks something like:: |
1251 | 533 | 542 | ||
1252 | @@ -561,6 +570,7 @@ | |||
1253 | 561 | else: | 570 | else: |
1254 | 562 | cert_filename = 'cert' | 571 | cert_filename = 'cert' |
1255 | 563 | key_filename = 'key' | 572 | key_filename = 'key' |
1256 | 573 | |||
1257 | 564 | write_file(path=os.path.join(ssl_dir, cert_filename), | 574 | write_file(path=os.path.join(ssl_dir, cert_filename), |
1258 | 565 | content=b64decode(cert)) | 575 | content=b64decode(cert)) |
1259 | 566 | write_file(path=os.path.join(ssl_dir, key_filename), | 576 | write_file(path=os.path.join(ssl_dir, key_filename), |
1260 | @@ -572,7 +582,8 @@ | |||
1261 | 572 | install_ca_cert(b64decode(ca_cert)) | 582 | install_ca_cert(b64decode(ca_cert)) |
1262 | 573 | 583 | ||
1263 | 574 | def canonical_names(self): | 584 | def canonical_names(self): |
1265 | 575 | '''Figure out which canonical names clients will access this service''' | 585 | """Figure out which canonical names clients will access this service. |
1266 | 586 | """ | ||
1267 | 576 | cns = [] | 587 | cns = [] |
1268 | 577 | for r_id in relation_ids('identity-service'): | 588 | for r_id in relation_ids('identity-service'): |
1269 | 578 | for unit in related_units(r_id): | 589 | for unit in related_units(r_id): |
1270 | @@ -580,55 +591,80 @@ | |||
1271 | 580 | for k in rdata: | 591 | for k in rdata: |
1272 | 581 | if k.startswith('ssl_key_'): | 592 | if k.startswith('ssl_key_'): |
1273 | 582 | cns.append(k.lstrip('ssl_key_')) | 593 | cns.append(k.lstrip('ssl_key_')) |
1275 | 583 | return list(set(cns)) | 594 | |
1276 | 595 | return sorted(list(set(cns))) | ||
1277 | 596 | |||
1278 | 597 | def get_network_addresses(self): | ||
1279 | 598 | """For each network configured, return corresponding address and vip | ||
1280 | 599 | (if available). | ||
1281 | 600 | |||
1282 | 601 | Returns a list of tuples of the form: | ||
1283 | 602 | |||
1284 | 603 | [(address_in_net_a, vip_in_net_a), | ||
1285 | 604 | (address_in_net_b, vip_in_net_b), | ||
1286 | 605 | ...] | ||
1287 | 606 | |||
1288 | 607 | or, if no vip(s) available: | ||
1289 | 608 | |||
1290 | 609 | [(address_in_net_a, address_in_net_a), | ||
1291 | 610 | (address_in_net_b, address_in_net_b), | ||
1292 | 611 | ...] | ||
1293 | 612 | """ | ||
1294 | 613 | addresses = [] | ||
1295 | 614 | if config('vip'): | ||
1296 | 615 | vips = config('vip').split() | ||
1297 | 616 | else: | ||
1298 | 617 | vips = [] | ||
1299 | 618 | |||
1300 | 619 | for net_type in ['os-internal-network', 'os-admin-network', | ||
1301 | 620 | 'os-public-network']: | ||
1302 | 621 | addr = get_address_in_network(config(net_type), | ||
1303 | 622 | unit_get('private-address')) | ||
1304 | 623 | if len(vips) > 1 and is_clustered(): | ||
1305 | 624 | if not config(net_type): | ||
1306 | 625 | log("Multiple networks configured but net_type " | ||
1307 | 626 | "is None (%s)." % net_type, level=WARNING) | ||
1308 | 627 | continue | ||
1309 | 628 | |||
1310 | 629 | for vip in vips: | ||
1311 | 630 | if is_address_in_network(config(net_type), vip): | ||
1312 | 631 | addresses.append((addr, vip)) | ||
1313 | 632 | break | ||
1314 | 633 | |||
1315 | 634 | elif is_clustered() and config('vip'): | ||
1316 | 635 | addresses.append((addr, config('vip'))) | ||
1317 | 636 | else: | ||
1318 | 637 | addresses.append((addr, addr)) | ||
1319 | 638 | |||
1320 | 639 | return sorted(addresses) | ||
1321 | 584 | 640 | ||
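The new `get_network_addresses` helper pairs each configured network's local address with the vip in that network, or with the address itself when the unit is not clustered. A simplified standalone version with the charm-helpers calls replaced by plain arguments (a sketch, not the charmhelpers API):

```python
def network_addresses(net_addrs, vips, clustered):
    # net_addrs: (configured_network, local_addr) per net type, with
    # configured_network None when the os-*-network option is unset;
    # vips: (vip, network) pairs standing in for charm config.
    addresses = []
    for net, addr in net_addrs:
        if len(vips) > 1 and clustered:
            if net is None:
                continue  # multiple vips require explicit networks
            for vip, vip_net in vips:
                if vip_net == net:
                    addresses.append((addr, vip))
                    break
        elif clustered and vips:
            addresses.append((addr, vips[0][0]))
        else:
            addresses.append((addr, addr))
    return sorted(addresses)
```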
1322 | 585 | def __call__(self): | 641 | def __call__(self): |
1324 | 586 | if isinstance(self.external_ports, basestring): | 642 | if isinstance(self.external_ports, six.string_types): |
1325 | 587 | self.external_ports = [self.external_ports] | 643 | self.external_ports = [self.external_ports] |
1327 | 588 | if (not self.external_ports or not https()): | 644 | |
1328 | 645 | if not self.external_ports or not https(): | ||
1329 | 589 | return {} | 646 | return {} |
1330 | 590 | 647 | ||
1331 | 591 | self.configure_ca() | 648 | self.configure_ca() |
1332 | 592 | self.enable_modules() | 649 | self.enable_modules() |
1333 | 593 | 650 | ||
1339 | 594 | ctxt = { | 651 | ctxt = {'namespace': self.service_namespace, |
1340 | 595 | 'namespace': self.service_namespace, | 652 | 'endpoints': [], |
1341 | 596 | 'endpoints': [], | 653 | 'ext_ports': []} |
1337 | 597 | 'ext_ports': [] | ||
1338 | 598 | } | ||
1342 | 599 | 654 | ||
1343 | 600 | for cn in self.canonical_names(): | 655 | for cn in self.canonical_names(): |
1344 | 601 | self.configure_cert(cn) | 656 | self.configure_cert(cn) |
1345 | 602 | 657 | ||
1368 | 603 | addresses = [] | 658 | addresses = self.get_network_addresses() |
1369 | 604 | vips = [] | 659 | for address, endpoint in sorted(set(addresses)): |
1348 | 605 | if config('vip'): | ||
1349 | 606 | vips = config('vip').split() | ||
1350 | 607 | |||
1351 | 608 | for network_type in ['os-internal-network', | ||
1352 | 609 | 'os-admin-network', | ||
1353 | 610 | 'os-public-network']: | ||
1354 | 611 | address = get_address_in_network(config(network_type), | ||
1355 | 612 | unit_get('private-address')) | ||
1356 | 613 | if len(vips) > 0 and is_clustered(): | ||
1357 | 614 | for vip in vips: | ||
1358 | 615 | if is_address_in_network(config(network_type), | ||
1359 | 616 | vip): | ||
1360 | 617 | addresses.append((address, vip)) | ||
1361 | 618 | break | ||
1362 | 619 | elif is_clustered(): | ||
1363 | 620 | addresses.append((address, config('vip'))) | ||
1364 | 621 | else: | ||
1365 | 622 | addresses.append((address, address)) | ||
1366 | 623 | |||
1367 | 624 | for address, endpoint in set(addresses): | ||
1370 | 625 | for api_port in self.external_ports: | 660 | for api_port in self.external_ports: |
1371 | 626 | ext_port = determine_apache_port(api_port) | 661 | ext_port = determine_apache_port(api_port) |
1372 | 627 | int_port = determine_api_port(api_port) | 662 | int_port = determine_api_port(api_port) |
1373 | 628 | portmap = (address, endpoint, int(ext_port), int(int_port)) | 663 | portmap = (address, endpoint, int(ext_port), int(int_port)) |
1374 | 629 | ctxt['endpoints'].append(portmap) | 664 | ctxt['endpoints'].append(portmap) |
1375 | 630 | ctxt['ext_ports'].append(int(ext_port)) | 665 | ctxt['ext_ports'].append(int(ext_port)) |
1377 | 631 | ctxt['ext_ports'] = list(set(ctxt['ext_ports'])) | 666 | |
1378 | 667 | ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports']))) | ||
1379 | 632 | return ctxt | 668 | return ctxt |
1380 | 633 | 669 | ||
1381 | 634 | 670 | ||
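The endpoint loop in `__call__` above builds `(address, endpoint, ext_port, int_port)` tuples and now sorts the deduplicated `ext_ports` so the rendered vhost is deterministic. A stub with the port helpers passed in as callables (standing in for `determine_apache_port`/`determine_api_port`):

```python
def build_endpoints(addresses, external_ports, apache_port, api_port):
    # apache_port/api_port: callables mapping an api port to its
    # external and internal ports (assumptions for this sketch).
    ctxt = {'endpoints': [], 'ext_ports': []}
    for address, endpoint in sorted(set(addresses)):
        for port in external_ports:
            ext, internal = apache_port(port), api_port(port)
            ctxt['endpoints'].append((address, endpoint, ext, internal))
            ctxt['ext_ports'].append(ext)
    # Deduplicate and sort for a stable template context.
    ctxt['ext_ports'] = sorted(set(ctxt['ext_ports']))
    return ctxt
```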
1382 | @@ -645,21 +681,23 @@ | |||
1383 | 645 | 681 | ||
1384 | 646 | @property | 682 | @property |
1385 | 647 | def packages(self): | 683 | def packages(self): |
1388 | 648 | return neutron_plugin_attribute( | 684 | return neutron_plugin_attribute(self.plugin, 'packages', |
1389 | 649 | self.plugin, 'packages', self.network_manager) | 685 | self.network_manager) |
1390 | 650 | 686 | ||
1391 | 651 | @property | 687 | @property |
1392 | 652 | def neutron_security_groups(self): | 688 | def neutron_security_groups(self): |
1393 | 653 | return None | 689 | return None |
1394 | 654 | 690 | ||
1395 | 655 | def _ensure_packages(self): | 691 | def _ensure_packages(self): |
1397 | 656 | [ensure_packages(pkgs) for pkgs in self.packages] | 692 | for pkgs in self.packages: |
1398 | 693 | ensure_packages(pkgs) | ||
1399 | 657 | 694 | ||
1400 | 658 | def _save_flag_file(self): | 695 | def _save_flag_file(self): |
1401 | 659 | if self.network_manager == 'quantum': | 696 | if self.network_manager == 'quantum': |
1402 | 660 | _file = '/etc/nova/quantum_plugin.conf' | 697 | _file = '/etc/nova/quantum_plugin.conf' |
1403 | 661 | else: | 698 | else: |
1404 | 662 | _file = '/etc/nova/neutron_plugin.conf' | 699 | _file = '/etc/nova/neutron_plugin.conf' |
1405 | 700 | |||
1406 | 663 | with open(_file, 'wb') as out: | 701 | with open(_file, 'wb') as out: |
1407 | 664 | out.write(self.plugin + '\n') | 702 | out.write(self.plugin + '\n') |
1408 | 665 | 703 | ||
1409 | @@ -668,13 +706,11 @@ | |||
1410 | 668 | self.network_manager) | 706 | self.network_manager) |
1411 | 669 | config = neutron_plugin_attribute(self.plugin, 'config', | 707 | config = neutron_plugin_attribute(self.plugin, 'config', |
1412 | 670 | self.network_manager) | 708 | self.network_manager) |
1420 | 671 | ovs_ctxt = { | 709 | ovs_ctxt = {'core_plugin': driver, |
1421 | 672 | 'core_plugin': driver, | 710 | 'neutron_plugin': 'ovs', |
1422 | 673 | 'neutron_plugin': 'ovs', | 711 | 'neutron_security_groups': self.neutron_security_groups, |
1423 | 674 | 'neutron_security_groups': self.neutron_security_groups, | 712 | 'local_ip': unit_private_ip(), |
1424 | 675 | 'local_ip': unit_private_ip(), | 713 | 'config': config} |
1418 | 676 | 'config': config | ||
1419 | 677 | } | ||
1425 | 678 | 714 | ||
1426 | 679 | return ovs_ctxt | 715 | return ovs_ctxt |
1427 | 680 | 716 | ||
1428 | @@ -683,13 +719,11 @@ | |||
1429 | 683 | self.network_manager) | 719 | self.network_manager) |
1430 | 684 | config = neutron_plugin_attribute(self.plugin, 'config', | 720 | config = neutron_plugin_attribute(self.plugin, 'config', |
1431 | 685 | self.network_manager) | 721 | self.network_manager) |
1439 | 686 | nvp_ctxt = { | 722 | nvp_ctxt = {'core_plugin': driver, |
1440 | 687 | 'core_plugin': driver, | 723 | 'neutron_plugin': 'nvp', |
1441 | 688 | 'neutron_plugin': 'nvp', | 724 | 'neutron_security_groups': self.neutron_security_groups, |
1442 | 689 | 'neutron_security_groups': self.neutron_security_groups, | 725 | 'local_ip': unit_private_ip(), |
1443 | 690 | 'local_ip': unit_private_ip(), | 726 | 'config': config} |
1437 | 691 | 'config': config | ||
1438 | 692 | } | ||
1444 | 693 | 727 | ||
1445 | 694 | return nvp_ctxt | 728 | return nvp_ctxt |
1446 | 695 | 729 | ||
1447 | @@ -698,35 +732,50 @@ | |||
1448 | 698 | self.network_manager) | 732 | self.network_manager) |
1449 | 699 | n1kv_config = neutron_plugin_attribute(self.plugin, 'config', | 733 | n1kv_config = neutron_plugin_attribute(self.plugin, 'config', |
1450 | 700 | self.network_manager) | 734 | self.network_manager) |
1463 | 701 | n1kv_ctxt = { | 735 | n1kv_user_config_flags = config('n1kv-config-flags') |
1464 | 702 | 'core_plugin': driver, | 736 | restrict_policy_profiles = config('n1kv-restrict-policy-profiles') |
1465 | 703 | 'neutron_plugin': 'n1kv', | 737 | n1kv_ctxt = {'core_plugin': driver, |
1466 | 704 | 'neutron_security_groups': self.neutron_security_groups, | 738 | 'neutron_plugin': 'n1kv', |
1467 | 705 | 'local_ip': unit_private_ip(), | 739 | 'neutron_security_groups': self.neutron_security_groups, |
1468 | 706 | 'config': n1kv_config, | 740 | 'local_ip': unit_private_ip(), |
1469 | 707 | 'vsm_ip': config('n1kv-vsm-ip'), | 741 | 'config': n1kv_config, |
1470 | 708 | 'vsm_username': config('n1kv-vsm-username'), | 742 | 'vsm_ip': config('n1kv-vsm-ip'), |
1471 | 709 | 'vsm_password': config('n1kv-vsm-password'), | 743 | 'vsm_username': config('n1kv-vsm-username'), |
1472 | 710 | 'restrict_policy_profiles': config( | 744 | 'vsm_password': config('n1kv-vsm-password'), |
1473 | 711 | 'n1kv_restrict_policy_profiles'), | 745 | 'restrict_policy_profiles': restrict_policy_profiles} |
1474 | 712 | } | 746 | |
1475 | 747 | if n1kv_user_config_flags: | ||
1476 | 748 | flags = config_flags_parser(n1kv_user_config_flags) | ||
1477 | 749 | n1kv_ctxt['user_config_flags'] = flags | ||
1478 | 713 | 750 | ||
1479 | 714 | return n1kv_ctxt | 751 | return n1kv_ctxt |
1480 | 715 | 752 | ||
1481 | 753 | def calico_ctxt(self): | ||
1482 | 754 | driver = neutron_plugin_attribute(self.plugin, 'driver', | ||
1483 | 755 | self.network_manager) | ||
1484 | 756 | config = neutron_plugin_attribute(self.plugin, 'config', | ||
1485 | 757 | self.network_manager) | ||
1486 | 758 | calico_ctxt = {'core_plugin': driver, | ||
1487 | 759 | 'neutron_plugin': 'Calico', | ||
1488 | 760 | 'neutron_security_groups': self.neutron_security_groups, | ||
1489 | 761 | 'local_ip': unit_private_ip(), | ||
1490 | 762 | 'config': config} | ||
1491 | 763 | |||
1492 | 764 | return calico_ctxt | ||
1493 | 765 | |||
1494 | 716 | def neutron_ctxt(self): | 766 | def neutron_ctxt(self): |
1495 | 717 | if https(): | 767 | if https(): |
1496 | 718 | proto = 'https' | 768 | proto = 'https' |
1497 | 719 | else: | 769 | else: |
1498 | 720 | proto = 'http' | 770 | proto = 'http' |
1499 | 771 | |||
1500 | 721 | if is_clustered(): | 772 | if is_clustered(): |
1501 | 722 | host = config('vip') | 773 | host = config('vip') |
1502 | 723 | else: | 774 | else: |
1503 | 724 | host = unit_get('private-address') | 775 | host = unit_get('private-address') |
1509 | 725 | url = '%s://%s:%s' % (proto, host, '9696') | 776 | |
1510 | 726 | ctxt = { | 777 | ctxt = {'network_manager': self.network_manager, |
1511 | 727 | 'network_manager': self.network_manager, | 778 | 'neutron_url': '%s://%s:%s' % (proto, host, '9696')} |
1507 | 728 | 'neutron_url': url, | ||
1508 | 729 | } | ||
1512 | 730 | return ctxt | 779 | return ctxt |
1513 | 731 | 780 | ||
1514 | 732 | def __call__(self): | 781 | def __call__(self): |
1515 | @@ -746,6 +795,8 @@ | |||
1516 | 746 | ctxt.update(self.nvp_ctxt()) | 795 | ctxt.update(self.nvp_ctxt()) |
1517 | 747 | elif self.plugin == 'n1kv': | 796 | elif self.plugin == 'n1kv': |
1518 | 748 | ctxt.update(self.n1kv_ctxt()) | 797 | ctxt.update(self.n1kv_ctxt()) |
1519 | 798 | elif self.plugin == 'Calico': | ||
1520 | 799 | ctxt.update(self.calico_ctxt()) | ||
1521 | 749 | 800 | ||
1522 | 750 | alchemy_flags = config('neutron-alchemy-flags') | 801 | alchemy_flags = config('neutron-alchemy-flags') |
1523 | 751 | if alchemy_flags: | 802 | if alchemy_flags: |
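The elif chain above dispatches on `self.plugin` to one context builder per backend, with the new `Calico` branch added alongside ovs/nvp/n1kv. The same dispatch can be sketched as a plain lookup table:

```python
def plugin_ctxt(plugin, builders):
    # builders: plugin-name -> context-builder callable, mirroring
    # the ovs/nvp/n1kv/Calico elif chain in NeutronContext.__call__.
    builder = builders.get(plugin)
    return builder() if builder else {}
```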
1524 | @@ -757,23 +808,40 @@ | |||
1525 | 757 | 808 | ||
1526 | 758 | 809 | ||
1527 | 759 | class OSConfigFlagContext(OSContextGenerator): | 810 | class OSConfigFlagContext(OSContextGenerator): |
1532 | 760 | 811 | """Provides support for user-defined config flags. | |
1533 | 761 | """ | 812 | |
1534 | 762 | Responsible for adding user-defined config-flags in charm config to a | 813 | Users can define a comma-seperated list of key=value pairs |
1535 | 763 | template context. | 814 | in the charm configuration and apply them at any point in |
1536 | 815 | any file by using a template flag. | ||
1537 | 816 | |||
1538 | 817 | Sometimes users might want config flags inserted within a | ||
1539 | 818 | specific section so this class allows users to specify the | ||
1540 | 819 | template flag name, allowing for multiple template flags | ||
1541 | 820 | (sections) within the same context. | ||
1542 | 764 | 821 | ||
1543 | 765 | NOTE: the value of config-flags may be a comma-separated list of | 822 | NOTE: the value of config-flags may be a comma-separated list of |
1544 | 766 | key=value pairs and some Openstack config files support | 823 | key=value pairs and some Openstack config files support |
1545 | 767 | comma-separated lists as values. | 824 | comma-separated lists as values. |
1546 | 768 | """ | 825 | """ |
1547 | 769 | 826 | ||
1548 | 827 | def __init__(self, charm_flag='config-flags', | ||
1549 | 828 | template_flag='user_config_flags'): | ||
1550 | 829 | """ | ||
1551 | 830 | :param charm_flag: config flags in charm configuration. | ||
1552 | 831 | :param template_flag: insert point for user-defined flags in template | ||
1553 | 832 | file. | ||
1554 | 833 | """ | ||
1555 | 834 | super(OSConfigFlagContext, self).__init__() | ||
1556 | 835 | self._charm_flag = charm_flag | ||
1557 | 836 | self._template_flag = template_flag | ||
1558 | 837 | |||
1559 | 770 | def __call__(self): | 838 | def __call__(self): |
1561 | 771 | config_flags = config('config-flags') | 839 | config_flags = config(self._charm_flag) |
1562 | 772 | if not config_flags: | 840 | if not config_flags: |
1563 | 773 | return {} | 841 | return {} |
1564 | 774 | 842 | ||
1567 | 775 | flags = config_flags_parser(config_flags) | 843 | return {self._template_flag: |
1568 | 776 | return {'user_config_flags': flags} | 844 | config_flags_parser(config_flags)} |
1569 | 777 | 845 | ||
1570 | 778 | 846 | ||
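The parameterised OSConfigFlagContext above lets a charm expose several flag sections by choosing `charm_flag`/`template_flag` per instance. A minimal parser for the comma-separated key=value format it consumes (a simplified stand-in for `config_flags_parser`, which additionally tolerates comma-separated list values):

```python
def parse_config_flags(flags):
    # Parse "k1=v1,k2=v2" into a dict of stripped strings.
    parsed = {}
    for pair in flags.split(','):
        key, _, value = pair.partition('=')
        parsed[key.strip()] = value.strip()
    return parsed
```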
1571 | 779 | class SubordinateConfigContext(OSContextGenerator): | 847 | class SubordinateConfigContext(OSContextGenerator): |
1572 | @@ -817,7 +885,6 @@ | |||
1573 | 817 | }, | 885 | }, |
1574 | 818 | } | 886 | } |
1575 | 819 | } | 887 | } |
1576 | 820 | |||
1577 | 821 | """ | 888 | """ |
1578 | 822 | 889 | ||
1579 | 823 | def __init__(self, service, config_file, interface): | 890 | def __init__(self, service, config_file, interface): |
1580 | @@ -847,26 +914,28 @@ | |||
1581 | 847 | 914 | ||
1582 | 848 | if self.service not in sub_config: | 915 | if self.service not in sub_config: |
1583 | 849 | log('Found subordinate_config on %s but it contained' | 916 | log('Found subordinate_config on %s but it contained' |
1585 | 850 | 'nothing for %s service' % (rid, self.service)) | 917 | 'nothing for %s service' % (rid, self.service), |
1586 | 918 | level=INFO) | ||
1587 | 851 | continue | 919 | continue |
1588 | 852 | 920 | ||
1589 | 853 | sub_config = sub_config[self.service] | 921 | sub_config = sub_config[self.service] |
1590 | 854 | if self.config_file not in sub_config: | 922 | if self.config_file not in sub_config: |
1591 | 855 | log('Found subordinate_config on %s but it contained' | 923 | log('Found subordinate_config on %s but it contained' |
1593 | 856 | 'nothing for %s' % (rid, self.config_file)) | 924 | 'nothing for %s' % (rid, self.config_file), |
1594 | 925 | level=INFO) | ||
1595 | 857 | continue | 926 | continue |
1596 | 858 | 927 | ||
1597 | 859 | sub_config = sub_config[self.config_file] | 928 | sub_config = sub_config[self.config_file] |
1599 | 860 | for k, v in sub_config.iteritems(): | 929 | for k, v in six.iteritems(sub_config): |
1600 | 861 | if k == 'sections': | 930 | if k == 'sections': |
1603 | 862 | for section, config_dict in v.iteritems(): | 931 | for section, config_dict in six.iteritems(v): |
1604 | 863 | log("adding section '%s'" % (section)) | 932 | log("adding section '%s'" % (section), |
1605 | 933 | level=DEBUG) | ||
1606 | 864 | ctxt[k][section] = config_dict | 934 | ctxt[k][section] = config_dict |
1607 | 865 | else: | 935 | else: |
1608 | 866 | ctxt[k] = v | 936 | ctxt[k] = v |
1609 | 867 | 937 | ||
1612 | 868 | log("%d section(s) found" % (len(ctxt['sections'])), level=INFO) | 938 | log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG) |
1611 | 869 | |||
1613 | 870 | return ctxt | 939 | return ctxt |
1614 | 871 | 940 | ||
1615 | 872 | 941 | ||
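SubordinateConfigContext above merges per-service, per-file config from the subordinate relation, treating `sections` entries as nested per-section dicts. The merge step can be sketched standalone (using `dict.items()` rather than the `six.iteritems` the py2/py3 diff adopts):

```python
def merge_sub_config(ctxt, sub_config):
    # 'sections' values nest per-section dicts; everything else is
    # copied through, as in SubordinateConfigContext.__call__.
    for k, v in sub_config.items():
        if k == 'sections':
            for section, config_dict in v.items():
                ctxt.setdefault('sections', {})[section] = config_dict
        else:
            ctxt[k] = v
    return ctxt
```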
1616 | @@ -878,15 +947,14 @@ | |||
1617 | 878 | False if config('debug') is None else config('debug') | 947 | False if config('debug') is None else config('debug') |
1618 | 879 | ctxt['verbose'] = \ | 948 | ctxt['verbose'] = \ |
1619 | 880 | False if config('verbose') is None else config('verbose') | 949 | False if config('verbose') is None else config('verbose') |
1620 | 950 | |||
1621 | 881 | return ctxt | 951 | return ctxt |
1622 | 882 | 952 | ||
1623 | 883 | 953 | ||
1624 | 884 | class SyslogContext(OSContextGenerator): | 954 | class SyslogContext(OSContextGenerator): |
1625 | 885 | 955 | ||
1626 | 886 | def __call__(self): | 956 | def __call__(self): |
1630 | 887 | ctxt = { | 957 | ctxt = {'use_syslog': config('use-syslog')} |
1628 | 888 | 'use_syslog': config('use-syslog') | ||
1629 | 889 | } | ||
1631 | 890 | return ctxt | 958 | return ctxt |
1632 | 891 | 959 | ||
1633 | 892 | 960 | ||
1634 | @@ -894,10 +962,56 @@ | |||
1635 | 894 | 962 | ||
1636 | 895 | def __call__(self): | 963 | def __call__(self): |
1637 | 896 | if config('prefer-ipv6'): | 964 | if config('prefer-ipv6'): |
1641 | 897 | return { | 965 | return {'bind_host': '::'} |
1639 | 898 | 'bind_host': '::' | ||
1640 | 899 | } | ||
1642 | 900 | else: | 966 | else: |
1646 | 901 | return { | 967 | return {'bind_host': '0.0.0.0'} |
1647 | 902 | 'bind_host': '0.0.0.0' | 968 | |
1648 | 903 | } | 969 | |
1649 | 970 | class WorkerConfigContext(OSContextGenerator): | ||
1650 | 971 | |||
1651 | 972 | @property | ||
1652 | 973 | def num_cpus(self): | ||
1653 | 974 | try: | ||
1654 | 975 | from psutil import NUM_CPUS | ||
1655 | 976 | except ImportError: | ||
1656 | 977 | apt_install('python-psutil', fatal=True) | ||
1657 | 978 | from psutil import NUM_CPUS | ||
1658 | 979 | |||
1659 | 980 | return NUM_CPUS | ||
1660 | 981 | |||
1661 | 982 | def __call__(self): | ||
1662 | 983 | multiplier = config('worker-multiplier') or 0 | ||
1663 | 984 | ctxt = {"workers": self.num_cpus * multiplier} | ||
1664 | 985 | return ctxt | ||
1665 | 986 | |||
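The new WorkerConfigContext above scales the worker count by CPU count, with `or 0` coercing an unset `worker-multiplier` to zero so the template always receives an integer. As a standalone sketch:

```python
def worker_ctxt(num_cpus, multiplier):
    # worker-multiplier may be unset (None); `or 0` coerces it so
    # num_cpus * multiplier is always an integer.
    multiplier = multiplier or 0
    return {'workers': num_cpus * multiplier}
```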
1666 | 987 | |||
1667 | 988 | class ZeroMQContext(OSContextGenerator): | ||
1668 | 989 | interfaces = ['zeromq-configuration'] | ||
1669 | 990 | |||
1670 | 991 | def __call__(self): | ||
1671 | 992 | ctxt = {} | ||
1672 | 993 | if is_relation_made('zeromq-configuration', 'host'): | ||
1673 | 994 | for rid in relation_ids('zeromq-configuration'): | ||
1674 | 995 | for unit in related_units(rid): | ||
1675 | 996 | ctxt['zmq_nonce'] = relation_get('nonce', unit, rid) | ||
1676 | 997 | ctxt['zmq_host'] = relation_get('host', unit, rid) | ||
1677 | 998 | |||
1678 | 999 | return ctxt | ||
1679 | 1000 | |||
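ZeroMQContext above iterates all related units and keeps the last `nonce`/`host` seen. With the relation data stubbed out as a list of dicts, that behaviour looks like:

```python
def zmq_ctxt(relation_units):
    # relation_units: relation data dicts, one per remote unit; the
    # last unit iterated wins, matching the loop in ZeroMQContext.
    ctxt = {}
    for data in relation_units:
        ctxt['zmq_nonce'] = data.get('nonce')
        ctxt['zmq_host'] = data.get('host')
    return ctxt
```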
1680 | 1001 | |||
1681 | 1002 | class NotificationDriverContext(OSContextGenerator): | ||
1682 | 1003 | |||
1683 | 1004 | def __init__(self, zmq_relation='zeromq-configuration', | ||
1684 | 1005 | amqp_relation='amqp'): | ||
1685 | 1006 | """ | ||
1686 | 1007 | :param zmq_relation: Name of Zeromq relation to check | ||
1687 | 1008 | """ | ||
1688 | 1009 | self.zmq_relation = zmq_relation | ||
1689 | 1010 | self.amqp_relation = amqp_relation | ||
1690 | 1011 | |||
1691 | 1012 | def __call__(self): | ||
1692 | 1013 | ctxt = {'notifications': 'False'} | ||
1693 | 1014 | if is_relation_made(self.amqp_relation): | ||
1694 | 1015 | ctxt['notifications'] = "True" | ||
1695 | 1016 | |||
1696 | 1017 | return ctxt | ||
1697 | 904 | 1018 | ||
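NotificationDriverContext above enables notifications only when an amqp relation exists, and the flag is rendered as a string because the template expects one. Reduced to its essence:

```python
def notification_ctxt(amqp_related):
    # String values ('True'/'False'), not booleans, are what the
    # template consumes in NotificationDriverContext.
    return {'notifications': 'True' if amqp_related else 'False'}
```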
1698 | === modified file 'hooks/charmhelpers/contrib/openstack/ip.py' | |||
1699 | --- hooks/charmhelpers/contrib/openstack/ip.py 2014-09-22 20:22:04 +0000 | |||
1700 | +++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-10 07:57:54 +0000 | |||
1701 | @@ -2,21 +2,19 @@ | |||
1702 | 2 | config, | 2 | config, |
1703 | 3 | unit_get, | 3 | unit_get, |
1704 | 4 | ) | 4 | ) |
1705 | 5 | |||
1706 | 6 | from charmhelpers.contrib.network.ip import ( | 5 | from charmhelpers.contrib.network.ip import ( |
1707 | 7 | get_address_in_network, | 6 | get_address_in_network, |
1708 | 8 | is_address_in_network, | 7 | is_address_in_network, |
1709 | 9 | is_ipv6, | 8 | is_ipv6, |
1710 | 10 | get_ipv6_addr, | 9 | get_ipv6_addr, |
1711 | 11 | ) | 10 | ) |
1712 | 12 | |||
1713 | 13 | from charmhelpers.contrib.hahelpers.cluster import is_clustered | 11 | from charmhelpers.contrib.hahelpers.cluster import is_clustered |
1714 | 14 | 12 | ||
1715 | 15 | PUBLIC = 'public' | 13 | PUBLIC = 'public' |
1716 | 16 | INTERNAL = 'int' | 14 | INTERNAL = 'int' |
1717 | 17 | ADMIN = 'admin' | 15 | ADMIN = 'admin' |
1718 | 18 | 16 | ||
1720 | 19 | _address_map = { | 17 | ADDRESS_MAP = { |
1721 | 20 | PUBLIC: { | 18 | PUBLIC: { |
1722 | 21 | 'config': 'os-public-network', | 19 | 'config': 'os-public-network', |
1723 | 22 | 'fallback': 'public-address' | 20 | 'fallback': 'public-address' |
1724 | @@ -33,16 +31,14 @@ | |||
1725 | 33 | 31 | ||
1726 | 34 | 32 | ||
1727 | 35 | def canonical_url(configs, endpoint_type=PUBLIC): | 33 | def canonical_url(configs, endpoint_type=PUBLIC): |
1730 | 36 | ''' | 34 | """Returns the correct HTTP URL to this host given the state of HTTPS |
1729 | 37 | Returns the correct HTTP URL to this host given the state of HTTPS | ||
1731 | 38 | configuration, hacluster and charm configuration. | 35 | configuration, hacluster and charm configuration. |
1732 | 39 | 36 | ||
1739 | 40 | :configs OSTemplateRenderer: A config tempating object to inspect for | 37 | :param configs: OSTemplateRenderer config templating object to inspect |
1740 | 41 | a complete https context. | 38 | for a complete https context. |
1741 | 42 | :endpoint_type str: The endpoint type to resolve. | 39 | :param endpoint_type: str endpoint type to resolve. |
1742 | 43 | 40 | :returns: str base URL for services on the current service unit. |
1743 | 44 | :returns str: Base URL for services on the current service unit. | 41 | """ |
1738 | 45 | ''' | ||
1744 | 46 | scheme = 'http' | 42 | scheme = 'http' |
1745 | 47 | if 'https' in configs.complete_contexts(): | 43 | if 'https' in configs.complete_contexts(): |
1746 | 48 | scheme = 'https' | 44 | scheme = 'https' |
1747 | @@ -53,27 +49,45 @@ | |||
1748 | 53 | 49 | ||
1749 | 54 | 50 | ||
1750 | 55 | def resolve_address(endpoint_type=PUBLIC): | 51 | def resolve_address(endpoint_type=PUBLIC): |
1751 | 52 | """Return unit address depending on net config. | ||
1752 | 53 | |||
1753 | 54 | If unit is clustered with vip(s) and has net splits defined, return vip on | ||
1754 | 55 | correct network. If clustered with no nets defined, return primary vip. | ||
1755 | 56 | |||
1756 | 57 | If not clustered, return unit address ensuring address is on configured net | ||
1757 | 58 | split if one is configured. | ||
1758 | 59 | |||
1759 | 60 | :param endpoint_type: Network endpoint type | ||
1760 | 61 | """ | ||
1761 | 56 | resolved_address = None | 62 | resolved_address = None |
1766 | 57 | if is_clustered(): | 63 | vips = config('vip') |
1767 | 58 | if config(_address_map[endpoint_type]['config']) is None: | 64 | if vips: |
1768 | 59 | # Assume vip is simple and pass back directly | 65 | vips = vips.split() |
1769 | 60 | resolved_address = config('vip') | 66 | |
1770 | 67 | net_type = ADDRESS_MAP[endpoint_type]['config'] | ||
1771 | 68 | net_addr = config(net_type) | ||
1772 | 69 | net_fallback = ADDRESS_MAP[endpoint_type]['fallback'] | ||
1773 | 70 | clustered = is_clustered() | ||
1774 | 71 | if clustered: | ||
1775 | 72 | if not net_addr: | ||
1776 | 73 | # If no net-splits defined, we expect a single vip | ||
1777 | 74 | resolved_address = vips[0] | ||
1778 | 61 | else: | 75 | else: |
1783 | 62 | for vip in config('vip').split(): | 76 | for vip in vips: |
1784 | 63 | if is_address_in_network( | 77 | if is_address_in_network(net_addr, vip): |
1781 | 64 | config(_address_map[endpoint_type]['config']), | ||
1782 | 65 | vip): | ||
1785 | 66 | resolved_address = vip | 78 | resolved_address = vip |
1786 | 79 | break | ||
1787 | 67 | else: | 80 | else: |
1788 | 68 | if config('prefer-ipv6'): | 81 | if config('prefer-ipv6'): |
1790 | 69 | fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0] | 82 | fallback_addr = get_ipv6_addr(exc_list=vips)[0] |
1791 | 70 | else: | 83 | else: |
1795 | 71 | fallback_addr = unit_get(_address_map[endpoint_type]['fallback']) | 84 | fallback_addr = unit_get(net_fallback) |
1796 | 72 | resolved_address = get_address_in_network( | 85 | |
1797 | 73 | config(_address_map[endpoint_type]['config']), fallback_addr) | 86 | resolved_address = get_address_in_network(net_addr, fallback_addr) |
1798 | 74 | 87 | ||
1799 | 75 | if resolved_address is None: | 88 | if resolved_address is None: |
1804 | 76 | raise ValueError('Unable to resolve a suitable IP address' | 89 | raise ValueError("Unable to resolve a suitable IP address based on " |
1805 | 77 | ' based on charm state and configuration') | 90 | "charm state and configuration. (net_type=%s, " |
1806 | 78 | else: | 91 | "clustered=%s)" % (net_type, clustered)) |
1807 | 79 | return resolved_address | 92 | |
1808 | 93 | return resolved_address | ||
1809 | 80 | 94 | ||
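The reworked resolve_address() above picks a VIP per configured network split when the unit is clustered, falling back to the first VIP when no split is defined. A minimal standalone sketch of that selection logic (pick_vip and the simplified is_address_in_network stand-in are illustrative names, not part of charmhelpers):

```python
import ipaddress

def is_address_in_network(network, address):
    # Simplified stand-in for charmhelpers.contrib.network.ip's helper
    return ipaddress.ip_address(address) in ipaddress.ip_network(network)

def pick_vip(vips, net_addr):
    """Mimic the clustered branch of resolve_address():
    no net split configured -> first vip; otherwise the vip
    that sits on the configured network, or None."""
    if not net_addr:
        return vips[0]
    for vip in vips:
        if is_address_in_network(net_addr, vip):
            return vip
    return None

print(pick_vip(["10.0.0.10", "192.168.1.10"], "192.168.1.0/24"))  # 192.168.1.10
print(pick_vip(["10.0.0.10"], None))  # 10.0.0.10
```

Note that, as in the diff, a VIP matching no configured network leaves the address unresolved, which is what triggers the ValueError with the new net_type/clustered detail.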
1810 | === modified file 'hooks/charmhelpers/contrib/openstack/neutron.py' | |||
1811 | --- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:38:51 +0000 | |||
1812 | +++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-10 07:57:54 +0000 | |||
1813 | @@ -14,7 +14,7 @@ | |||
1814 | 14 | def headers_package(): | 14 | def headers_package(): |
1815 | 15 | """Ensures correct linux-headers for running kernel are installed, | 15 | """Ensures correct linux-headers for running kernel are installed, |
1816 | 16 | for building DKMS package""" | 16 | for building DKMS package""" |
1818 | 17 | kver = check_output(['uname', '-r']).strip() | 17 | kver = check_output(['uname', '-r']).decode('UTF-8').strip() |
1819 | 18 | return 'linux-headers-%s' % kver | 18 | return 'linux-headers-%s' % kver |
1820 | 19 | 19 | ||
1821 | 20 | QUANTUM_CONF_DIR = '/etc/quantum' | 20 | QUANTUM_CONF_DIR = '/etc/quantum' |
1822 | @@ -22,7 +22,7 @@ | |||
1823 | 22 | 22 | ||
1824 | 23 | def kernel_version(): | 23 | def kernel_version(): |
1825 | 24 | """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ | 24 | """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """ |
1827 | 25 | kver = check_output(['uname', '-r']).strip() | 25 | kver = check_output(['uname', '-r']).decode('UTF-8').strip() |
1828 | 26 | kver = kver.split('.') | 26 | kver = kver.split('.') |
1829 | 27 | return (int(kver[0]), int(kver[1])) | 27 | return (int(kver[0]), int(kver[1])) |
1830 | 28 | 28 | ||
1831 | @@ -138,10 +138,25 @@ | |||
1832 | 138 | relation_prefix='neutron', | 138 | relation_prefix='neutron', |
1833 | 139 | ssl_dir=NEUTRON_CONF_DIR)], | 139 | ssl_dir=NEUTRON_CONF_DIR)], |
1834 | 140 | 'services': [], | 140 | 'services': [], |
1836 | 141 | 'packages': [['neutron-plugin-cisco']], | 141 | 'packages': [[headers_package()] + determine_dkms_package(), |
1837 | 142 | ['neutron-plugin-cisco']], | ||
1838 | 142 | 'server_packages': ['neutron-server', | 143 | 'server_packages': ['neutron-server', |
1839 | 143 | 'neutron-plugin-cisco'], | 144 | 'neutron-plugin-cisco'], |
1840 | 144 | 'server_services': ['neutron-server'] | 145 | 'server_services': ['neutron-server'] |
1841 | 146 | }, | ||
1842 | 147 | 'Calico': { | ||
1843 | 148 | 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini', | ||
1844 | 149 | 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin', | ||
1845 | 150 | 'contexts': [ | ||
1846 | 151 | context.SharedDBContext(user=config('neutron-database-user'), | ||
1847 | 152 | database=config('neutron-database'), | ||
1848 | 153 | relation_prefix='neutron', | ||
1849 | 154 | ssl_dir=NEUTRON_CONF_DIR)], | ||
1850 | 155 | 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'], | ||
1851 | 156 | 'packages': [[headers_package()] + determine_dkms_package(), | ||
1852 | 157 | ['calico-compute', 'bird', 'neutron-dhcp-agent']], | ||
1853 | 158 | 'server_packages': ['neutron-server', 'calico-control'], | ||
1854 | 159 | 'server_services': ['neutron-server'] | ||
1855 | 145 | } | 160 | } |
1856 | 146 | } | 161 | } |
1857 | 147 | if release >= 'icehouse': | 162 | if release >= 'icehouse': |
1858 | @@ -162,7 +177,8 @@ | |||
1859 | 162 | elif manager == 'neutron': | 177 | elif manager == 'neutron': |
1860 | 163 | plugins = neutron_plugins() | 178 | plugins = neutron_plugins() |
1861 | 164 | else: | 179 | else: |
1863 | 165 | log('Error: Network manager does not support plugins.') | 180 | log("Network manager '%s' does not support plugins." % (manager), |
1864 | 181 | level=ERROR) | ||
1865 | 166 | raise Exception | 182 | raise Exception |
1866 | 167 | 183 | ||
1867 | 168 | try: | 184 | try: |
1868 | 169 | 185 | ||
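The .decode('UTF-8') additions to headers_package() and kernel_version() matter because check_output() returns bytes on Python 3; without the decode, '%s' formatting would render something like "linux-headers-b'3.13...'". A quick sketch of the fixed pattern (assumes a Linux host with uname available):

```python
from subprocess import check_output

# On Python 3, check_output() returns bytes, hence the added
# .decode('UTF-8') before .strip() in the diff above.
kver = check_output(['uname', '-r']).decode('UTF-8').strip()
pkg = 'linux-headers-%s' % kver
print(pkg)
```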
1869 | === modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg' | |||
1870 | --- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-06 21:57:43 +0000 | |||
1871 | +++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-10 07:57:54 +0000 | |||
1872 | @@ -35,7 +35,7 @@ | |||
1873 | 35 | stats auth admin:password | 35 | stats auth admin:password |
1874 | 36 | 36 | ||
1875 | 37 | {% if frontends -%} | 37 | {% if frontends -%} |
1877 | 38 | {% for service, ports in service_ports.iteritems() -%} | 38 | {% for service, ports in service_ports.items() -%} |
1878 | 39 | frontend tcp-in_{{ service }} | 39 | frontend tcp-in_{{ service }} |
1879 | 40 | bind *:{{ ports[0] }} | 40 | bind *:{{ ports[0] }} |
1880 | 41 | bind :::{{ ports[0] }} | 41 | bind :::{{ ports[0] }} |
1881 | @@ -46,7 +46,7 @@ | |||
1882 | 46 | {% for frontend in frontends -%} | 46 | {% for frontend in frontends -%} |
1883 | 47 | backend {{ service }}_{{ frontend }} | 47 | backend {{ service }}_{{ frontend }} |
1884 | 48 | balance leastconn | 48 | balance leastconn |
1886 | 49 | {% for unit, address in frontends[frontend]['backends'].iteritems() -%} | 49 | {% for unit, address in frontends[frontend]['backends'].items() -%} |
1887 | 50 | server {{ unit }} {{ address }}:{{ ports[1] }} check | 50 | server {{ unit }} {{ address }}:{{ ports[1] }} check |
1888 | 51 | {% endfor %} | 51 | {% endfor %} |
1889 | 52 | {% endfor -%} | 52 | {% endfor -%} |
1890 | 53 | 53 | ||
1891 | === modified file 'hooks/charmhelpers/contrib/openstack/templating.py' | |||
1892 | --- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-28 14:38:51 +0000 | |||
1893 | +++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-10 07:57:54 +0000 | |||
1894 | @@ -1,13 +1,13 @@ | |||
1895 | 1 | import os | 1 | import os |
1896 | 2 | 2 | ||
1897 | 3 | import six | ||
1898 | 4 | |||
1899 | 3 | from charmhelpers.fetch import apt_install | 5 | from charmhelpers.fetch import apt_install |
1900 | 4 | |||
1901 | 5 | from charmhelpers.core.hookenv import ( | 6 | from charmhelpers.core.hookenv import ( |
1902 | 6 | log, | 7 | log, |
1903 | 7 | ERROR, | 8 | ERROR, |
1904 | 8 | INFO | 9 | INFO |
1905 | 9 | ) | 10 | ) |
1906 | 10 | |||
1907 | 11 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES | 11 | from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES |
1908 | 12 | 12 | ||
1909 | 13 | try: | 13 | try: |
1910 | @@ -43,7 +43,7 @@ | |||
1911 | 43 | order by OpenStack release. | 43 | order by OpenStack release. |
1912 | 44 | """ | 44 | """ |
1913 | 45 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) | 45 | tmpl_dirs = [(rel, os.path.join(templates_dir, rel)) |
1915 | 46 | for rel in OPENSTACK_CODENAMES.itervalues()] | 46 | for rel in six.itervalues(OPENSTACK_CODENAMES)] |
1916 | 47 | 47 | ||
1917 | 48 | if not os.path.isdir(templates_dir): | 48 | if not os.path.isdir(templates_dir): |
1918 | 49 | log('Templates directory not found @ %s.' % templates_dir, | 49 | log('Templates directory not found @ %s.' % templates_dir, |
1919 | @@ -258,7 +258,7 @@ | |||
1920 | 258 | """ | 258 | """ |
1921 | 259 | Write out all registered config files. | 259 | Write out all registered config files. |
1922 | 260 | """ | 260 | """ |
1924 | 261 | [self.write(k) for k in self.templates.iterkeys()] | 261 | [self.write(k) for k in six.iterkeys(self.templates)] |
1925 | 262 | 262 | ||
1926 | 263 | def set_release(self, openstack_release): | 263 | def set_release(self, openstack_release): |
1927 | 264 | """ | 264 | """ |
1928 | @@ -275,5 +275,5 @@ | |||
1929 | 275 | ''' | 275 | ''' |
1930 | 276 | interfaces = [] | 276 | interfaces = [] |
1931 | 277 | [interfaces.extend(i.complete_contexts()) | 277 | [interfaces.extend(i.complete_contexts()) |
1933 | 278 | for i in self.templates.itervalues()] | 278 | for i in six.itervalues(self.templates)] |
1934 | 279 | return interfaces | 279 | return interfaces |
1935 | 280 | 280 | ||
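The six conversions in templating.py (and the .iteritems() -> .items() changes in haproxy.cfg) exist because Python 3 dicts dropped iteritems()/itervalues()/iterkeys(). A minimal shim illustrating what six.iteritems() dispatches to on each interpreter (iteritems here is a stand-in, not the six implementation):

```python
import sys

def iteritems(d):
    """Minimal stand-in for six.iteritems: Py3 dicts only have
    items(), so dispatch by interpreter version."""
    if sys.version_info[0] >= 3:
        return iter(d.items())
    return d.iteritems()

OPENSTACK_CODENAMES = {'2014.1': 'icehouse', '2014.2': 'juno'}
names = sorted(v for _, v in iteritems(OPENSTACK_CODENAMES))
print(names)  # ['icehouse', 'juno']
```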
1936 | === modified file 'hooks/charmhelpers/contrib/openstack/utils.py' | |||
1937 | --- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:57:43 +0000 | |||
1938 | +++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-10 07:57:54 +0000 | |||
1939 | @@ -2,6 +2,7 @@ | |||
1940 | 2 | 2 | ||
1941 | 3 | # Common python helper functions used for OpenStack charms. | 3 | # Common python helper functions used for OpenStack charms. |
1942 | 4 | from collections import OrderedDict | 4 | from collections import OrderedDict |
1943 | 5 | from functools import wraps | ||
1944 | 5 | 6 | ||
1945 | 6 | import subprocess | 7 | import subprocess |
1946 | 7 | import json | 8 | import json |
1947 | @@ -9,11 +10,13 @@ | |||
1948 | 9 | import socket | 10 | import socket |
1949 | 10 | import sys | 11 | import sys |
1950 | 11 | 12 | ||
1951 | 13 | import six | ||
1952 | 14 | import yaml | ||
1953 | 15 | |||
1954 | 12 | from charmhelpers.core.hookenv import ( | 16 | from charmhelpers.core.hookenv import ( |
1955 | 13 | config, | 17 | config, |
1956 | 14 | log as juju_log, | 18 | log as juju_log, |
1957 | 15 | charm_dir, | 19 | charm_dir, |
1958 | 16 | ERROR, | ||
1959 | 17 | INFO, | 20 | INFO, |
1960 | 18 | relation_ids, | 21 | relation_ids, |
1961 | 19 | relation_set | 22 | relation_set |
1962 | @@ -30,7 +33,8 @@ | |||
1963 | 30 | ) | 33 | ) |
1964 | 31 | 34 | ||
1965 | 32 | from charmhelpers.core.host import lsb_release, mounts, umount | 35 | from charmhelpers.core.host import lsb_release, mounts, umount |
1967 | 33 | from charmhelpers.fetch import apt_install, apt_cache | 36 | from charmhelpers.fetch import apt_install, apt_cache, install_remote |
1968 | 37 | from charmhelpers.contrib.python.packages import pip_install | ||
1969 | 34 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk | 38 | from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk |
1970 | 35 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device | 39 | from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device |
1971 | 36 | 40 | ||
1972 | @@ -112,7 +116,7 @@ | |||
1973 | 112 | 116 | ||
1974 | 113 | # Best guess match based on deb string provided | 117 | # Best guess match based on deb string provided |
1975 | 114 | if src.startswith('deb') or src.startswith('ppa'): | 118 | if src.startswith('deb') or src.startswith('ppa'): |
1977 | 115 | for k, v in OPENSTACK_CODENAMES.iteritems(): | 119 | for k, v in six.iteritems(OPENSTACK_CODENAMES): |
1978 | 116 | if v in src: | 120 | if v in src: |
1979 | 117 | return v | 121 | return v |
1980 | 118 | 122 | ||
1981 | @@ -133,7 +137,7 @@ | |||
1982 | 133 | 137 | ||
1983 | 134 | def get_os_version_codename(codename): | 138 | def get_os_version_codename(codename): |
1984 | 135 | '''Determine OpenStack version number from codename.''' | 139 | '''Determine OpenStack version number from codename.''' |
1986 | 136 | for k, v in OPENSTACK_CODENAMES.iteritems(): | 140 | for k, v in six.iteritems(OPENSTACK_CODENAMES): |
1987 | 137 | if v == codename: | 141 | if v == codename: |
1988 | 138 | return k | 142 | return k |
1989 | 139 | e = 'Could not derive OpenStack version for '\ | 143 | e = 'Could not derive OpenStack version for '\ |
1990 | @@ -193,7 +197,7 @@ | |||
1991 | 193 | else: | 197 | else: |
1992 | 194 | vers_map = OPENSTACK_CODENAMES | 198 | vers_map = OPENSTACK_CODENAMES |
1993 | 195 | 199 | ||
1995 | 196 | for version, cname in vers_map.iteritems(): | 200 | for version, cname in six.iteritems(vers_map): |
1996 | 197 | if cname == codename: | 201 | if cname == codename: |
1997 | 198 | return version | 202 | return version |
1998 | 199 | # e = "Could not determine OpenStack version for package: %s" % pkg | 203 | # e = "Could not determine OpenStack version for package: %s" % pkg |
1999 | @@ -317,7 +321,7 @@ | |||
2000 | 317 | rc_script.write( | 321 | rc_script.write( |
2001 | 318 | "#!/bin/bash\n") | 322 | "#!/bin/bash\n") |
2002 | 319 | [rc_script.write('export %s=%s\n' % (u, p)) | 323 | [rc_script.write('export %s=%s\n' % (u, p)) |
2004 | 320 | for u, p in env_vars.iteritems() if u != "script_path"] | 324 | for u, p in six.iteritems(env_vars) if u != "script_path"] |
2005 | 321 | 325 | ||
2006 | 322 | 326 | ||
2007 | 323 | def openstack_upgrade_available(package): | 327 | def openstack_upgrade_available(package): |
2008 | @@ -350,8 +354,8 @@ | |||
2009 | 350 | ''' | 354 | ''' |
2010 | 351 | _none = ['None', 'none', None] | 355 | _none = ['None', 'none', None] |
2011 | 352 | if (block_device in _none): | 356 | if (block_device in _none): |
2014 | 353 | error_out('prepare_storage(): Missing required input: ' | 357 | error_out('prepare_storage(): Missing required input: block_device=%s.' |
2015 | 354 | 'block_device=%s.' % block_device, level=ERROR) | 358 | % block_device) |
2016 | 355 | 359 | ||
2017 | 356 | if block_device.startswith('/dev/'): | 360 | if block_device.startswith('/dev/'): |
2018 | 357 | bdev = block_device | 361 | bdev = block_device |
2019 | @@ -367,8 +371,7 @@ | |||
2020 | 367 | bdev = '/dev/%s' % block_device | 371 | bdev = '/dev/%s' % block_device |
2021 | 368 | 372 | ||
2022 | 369 | if not is_block_device(bdev): | 373 | if not is_block_device(bdev): |
2025 | 370 | error_out('Failed to locate valid block device at %s' % bdev, | 374 | error_out('Failed to locate valid block device at %s' % bdev) |
2024 | 371 | level=ERROR) | ||
2026 | 372 | 375 | ||
2027 | 373 | return bdev | 376 | return bdev |
2028 | 374 | 377 | ||
2029 | @@ -417,7 +420,7 @@ | |||
2030 | 417 | 420 | ||
2031 | 418 | if isinstance(address, dns.name.Name): | 421 | if isinstance(address, dns.name.Name): |
2032 | 419 | rtype = 'PTR' | 422 | rtype = 'PTR' |
2034 | 420 | elif isinstance(address, basestring): | 423 | elif isinstance(address, six.string_types): |
2035 | 421 | rtype = 'A' | 424 | rtype = 'A' |
2036 | 422 | else: | 425 | else: |
2037 | 423 | return None | 426 | return None |
2038 | @@ -468,6 +471,14 @@ | |||
2039 | 468 | return result.split('.')[0] | 471 | return result.split('.')[0] |
2040 | 469 | 472 | ||
2041 | 470 | 473 | ||
2042 | 474 | def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'): | ||
2043 | 475 | mm_map = {} | ||
2044 | 476 | if os.path.isfile(mm_file): | ||
2045 | 477 | with open(mm_file, 'r') as f: | ||
2046 | 478 | mm_map = json.load(f) | ||
2047 | 479 | return mm_map | ||
2048 | 480 | |||
2049 | 481 | |||
2050 | 471 | def sync_db_with_multi_ipv6_addresses(database, database_user, | 482 | def sync_db_with_multi_ipv6_addresses(database, database_user, |
2051 | 472 | relation_prefix=None): | 483 | relation_prefix=None): |
2052 | 473 | hosts = get_ipv6_addr(dynamic_only=False) | 484 | hosts = get_ipv6_addr(dynamic_only=False) |
2053 | @@ -477,10 +488,132 @@ | |||
2054 | 477 | 'hostname': json.dumps(hosts)} | 488 | 'hostname': json.dumps(hosts)} |
2055 | 478 | 489 | ||
2056 | 479 | if relation_prefix: | 490 | if relation_prefix: |
2059 | 480 | keys = kwargs.keys() | 491 | for key in list(kwargs.keys()): |
2058 | 481 | for key in keys: | ||
2060 | 482 | kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] | 492 | kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key] |
2061 | 483 | del kwargs[key] | 493 | del kwargs[key] |
2062 | 484 | 494 | ||
2063 | 485 | for rid in relation_ids('shared-db'): | 495 | for rid in relation_ids('shared-db'): |
2064 | 486 | relation_set(relation_id=rid, **kwargs) | 496 | relation_set(relation_id=rid, **kwargs) |
2065 | 497 | |||
2066 | 498 | |||
2067 | 499 | def os_requires_version(ostack_release, pkg): | ||
2068 | 500 | """ | ||
2069 | 501 | Decorator for hook to specify minimum supported release | ||
2070 | 502 | """ | ||
2071 | 503 | def wrap(f): | ||
2072 | 504 | @wraps(f) | ||
2073 | 505 | def wrapped_f(*args): | ||
2074 | 506 | if os_release(pkg) < ostack_release: | ||
2075 | 507 | raise Exception("This hook is not supported on releases" | ||
2076 | 508 | " before %s" % ostack_release) | ||
2077 | 509 | f(*args) | ||
2078 | 510 | return wrapped_f | ||
2079 | 511 | return wrap | ||
2080 | 512 | |||
2081 | 513 | |||
2082 | 514 | def git_install_requested(): | ||
2083 | 515 | """Returns true if openstack-origin-git is specified.""" | ||
2084 | 516 | return config('openstack-origin-git') != "None" | ||
2085 | 517 | |||
2086 | 518 | |||
2087 | 519 | requirements_dir = None | ||
2088 | 520 | |||
2089 | 521 | |||
2090 | 522 | def git_clone_and_install(file_name, core_project): | ||
2091 | 523 | """Clone/install all OpenStack repos specified in yaml config file.""" | ||
2092 | 524 | global requirements_dir | ||
2093 | 525 | |||
2094 | 526 | if file_name == "None": | ||
2095 | 527 | return | ||
2096 | 528 | |||
2097 | 529 | yaml_file = os.path.join(charm_dir(), file_name) | ||
2098 | 530 | |||
2099 | 531 | # clone/install the requirements project first | ||
2100 | 532 | installed = _git_clone_and_install_subset(yaml_file, | ||
2101 | 533 | whitelist=['requirements']) | ||
2102 | 534 | if 'requirements' not in installed: | ||
2103 | 535 | error_out('requirements git repository must be specified') | ||
2104 | 536 | |||
2105 | 537 | # clone/install all other projects except requirements and the core project | ||
2106 | 538 | blacklist = ['requirements', core_project] | ||
2107 | 539 | _git_clone_and_install_subset(yaml_file, blacklist=blacklist, | ||
2108 | 540 | update_requirements=True) | ||
2109 | 541 | |||
2110 | 542 | # clone/install the core project | ||
2111 | 543 | whitelist = [core_project] | ||
2112 | 544 | installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist, | ||
2113 | 545 | update_requirements=True) | ||
2114 | 546 | if core_project not in installed: | ||
2115 | 547 | error_out('{} git repository must be specified'.format(core_project)) | ||
2116 | 548 | |||
2117 | 549 | |||
2118 | 550 | def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[], | ||
2119 | 551 | update_requirements=False): | ||
2120 | 552 | """Clone/install subset of OpenStack repos specified in yaml config file.""" | ||
2121 | 553 | global requirements_dir | ||
2122 | 554 | installed = [] | ||
2123 | 555 | |||
2124 | 556 | with open(yaml_file, 'r') as fd: | ||
2125 | 557 | projects = yaml.load(fd) | ||
2126 | 558 | for proj, val in projects.items(): | ||
2127 | 559 | # The project subset is chosen based on the following 3 rules: | ||
2128 | 560 | # 1) If project is in blacklist, we don't clone/install it, period. | ||
2129 | 561 | # 2) If whitelist is empty, we clone/install everything else. | ||
2130 | 562 | # 3) If whitelist is not empty, we clone/install everything in the | ||
2131 | 563 | # whitelist. | ||
2132 | 564 | if proj in blacklist: | ||
2133 | 565 | continue | ||
2134 | 566 | if whitelist and proj not in whitelist: | ||
2135 | 567 | continue | ||
2136 | 568 | repo = val['repository'] | ||
2137 | 569 | branch = val['branch'] | ||
2138 | 570 | repo_dir = _git_clone_and_install_single(repo, branch, | ||
2139 | 571 | update_requirements) | ||
2140 | 572 | if proj == 'requirements': | ||
2141 | 573 | requirements_dir = repo_dir | ||
2142 | 574 | installed.append(proj) | ||
2143 | 575 | return installed | ||
2144 | 576 | |||
2145 | 577 | |||
2146 | 578 | def _git_clone_and_install_single(repo, branch, update_requirements=False): | ||
2147 | 579 | """Clone and install a single git repository.""" | ||
2148 | 580 | dest_parent_dir = "/mnt/openstack-git/" | ||
2149 | 581 | dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo)) | ||
2150 | 582 | |||
2151 | 583 | if not os.path.exists(dest_parent_dir): | ||
2152 | 584 | juju_log('Host dir not mounted at {}. ' | ||
2153 | 585 | 'Creating directory there instead.'.format(dest_parent_dir)) | ||
2154 | 586 | os.mkdir(dest_parent_dir) | ||
2155 | 587 | |||
2156 | 588 | if not os.path.exists(dest_dir): | ||
2157 | 589 | juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch)) | ||
2158 | 590 | repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch) | ||
2159 | 591 | else: | ||
2160 | 592 | repo_dir = dest_dir | ||
2161 | 593 | |||
2162 | 594 | if update_requirements: | ||
2163 | 595 | if not requirements_dir: | ||
2164 | 596 | error_out('requirements repo must be cloned before ' | ||
2165 | 597 | 'updating from global requirements.') | ||
2166 | 598 | _git_update_requirements(repo_dir, requirements_dir) | ||
2167 | 599 | |||
2168 | 600 | juju_log('Installing git repo from dir: {}'.format(repo_dir)) | ||
2169 | 601 | pip_install(repo_dir) | ||
2170 | 602 | |||
2171 | 603 | return repo_dir | ||
2172 | 604 | |||
2173 | 605 | |||
2174 | 606 | def _git_update_requirements(package_dir, reqs_dir): | ||
2175 | 607 | """Update from global requirements. | ||
2176 | 608 | |||
2177 | 609 | Update an OpenStack git directory's requirements.txt and | ||
2178 | 610 | test-requirements.txt from global-requirements.txt.""" | ||
2179 | 611 | orig_dir = os.getcwd() | ||
2180 | 612 | os.chdir(reqs_dir) | ||
2181 | 613 | cmd = "python update.py {}".format(package_dir) | ||
2182 | 614 | try: | ||
2183 | 615 | subprocess.check_call(cmd.split(' ')) | ||
2184 | 616 | except subprocess.CalledProcessError: | ||
2185 | 617 | package = os.path.basename(package_dir) | ||
2186 | 618 | error_out("Error updating {} from global-requirements.txt".format(package)) | ||
2187 | 619 | os.chdir(orig_dir) | ||
2188 | 487 | 620 | ||
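The os_requires_version() decorator added above gates a hook on a minimum OpenStack release; the plain string comparison works because OpenStack codenames happen to sort alphabetically in release order (grizzly < havana < icehouse). An illustrative sketch, with os_release() stubbed to a fixed codename (the stub and hook names are hypothetical):

```python
from functools import wraps

def os_release(pkg):
    # Stub standing in for charmhelpers' os_release(); pretend
    # this unit is running Havana regardless of pkg.
    return 'havana'

def os_requires_version(ostack_release, pkg):
    """Refuse to run the wrapped hook on releases before ostack_release."""
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            # Codenames compare alphabetically in release order.
            if os_release(pkg) < ostack_release:
                raise Exception("This hook is not supported on releases"
                                " before %s" % ostack_release)
            return f(*args)  # returning here for the sketch
        return wrapped_f
    return wrap

@os_requires_version('grizzly', 'nova-common')
def grizzly_ok():
    return 'ran'

@os_requires_version('icehouse', 'nova-common')
def icehouse_only():
    return 'ran'

print(grizzly_ok())  # ran; havana >= grizzly
```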
2189 | === added directory 'hooks/charmhelpers/contrib/python' | |||
2190 | === added file 'hooks/charmhelpers/contrib/python/__init__.py' | |||
2191 | === added file 'hooks/charmhelpers/contrib/python/debug.py' | |||
2192 | --- hooks/charmhelpers/contrib/python/debug.py 1970-01-01 00:00:00 +0000 | |||
2193 | +++ hooks/charmhelpers/contrib/python/debug.py 2014-12-10 07:57:54 +0000 | |||
2194 | @@ -0,0 +1,40 @@ | |||
2195 | 1 | #!/usr/bin/env python | ||
2196 | 2 | # coding: utf-8 | ||
2197 | 3 | |||
2198 | 4 | from __future__ import print_function | ||
2199 | 5 | |||
2200 | 6 | __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" | ||
2201 | 7 | |||
2202 | 8 | import atexit | ||
2203 | 9 | import sys | ||
2204 | 10 | |||
2205 | 11 | from charmhelpers.contrib.python.rpdb import Rpdb | ||
2206 | 12 | from charmhelpers.core.hookenv import ( | ||
2207 | 13 | open_port, | ||
2208 | 14 | close_port, | ||
2209 | 15 | ERROR, | ||
2210 | 16 | log | ||
2211 | 17 | ) | ||
2212 | 18 | |||
2213 | 19 | DEFAULT_ADDR = "0.0.0.0" | ||
2214 | 20 | DEFAULT_PORT = 4444 | ||
2215 | 21 | |||
2216 | 22 | |||
2217 | 23 | def _error(message): | ||
2218 | 24 | log(message, level=ERROR) | ||
2219 | 25 | |||
2220 | 26 | |||
2221 | 27 | def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT): | ||
2222 | 28 | """ | ||
2223 | 29 | Set a trace point using the remote debugger | ||
2224 | 30 | """ | ||
2225 | 31 | atexit.register(close_port, port) | ||
2226 | 32 | try: | ||
2227 | 33 | log("Starting a remote python debugger session on %s:%s" % (addr, | ||
2228 | 34 | port)) | ||
2229 | 35 | open_port(port) | ||
2230 | 36 | debugger = Rpdb(addr=addr, port=port) | ||
2231 | 37 | debugger.set_trace(sys._getframe().f_back) | ||
2232 | 38 | except: | ||
2233 | 39 | _error("Cannot start a remote debug session on %s:%s" % (addr, | ||
2234 | 40 | port)) | ||
2235 | 0 | 41 | ||
2236 | === added file 'hooks/charmhelpers/contrib/python/packages.py' | |||
2237 | --- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000 | |||
2238 | +++ hooks/charmhelpers/contrib/python/packages.py 2014-12-10 07:57:54 +0000 | |||
2239 | @@ -0,0 +1,77 @@ | |||
2240 | 1 | #!/usr/bin/env python | ||
2241 | 2 | # coding: utf-8 | ||
2242 | 3 | |||
2243 | 4 | __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" | ||
2244 | 5 | |||
2245 | 6 | from charmhelpers.fetch import apt_install, apt_update | ||
2246 | 7 | from charmhelpers.core.hookenv import log | ||
2247 | 8 | |||
2248 | 9 | try: | ||
2249 | 10 | from pip import main as pip_execute | ||
2250 | 11 | except ImportError: | ||
2251 | 12 | apt_update() | ||
2252 | 13 | apt_install('python-pip') | ||
2253 | 14 | from pip import main as pip_execute | ||
2254 | 15 | |||
2255 | 16 | |||
2256 | 17 | def parse_options(given, available): | ||
2257 | 18 | """Given a set of options, check if available""" | ||
2258 | 19 | for key, value in sorted(given.items()): | ||
2259 | 20 | if key in available: | ||
2260 | 21 | yield "--{0}={1}".format(key, value) | ||
2261 | 22 | |||
2262 | 23 | |||
2263 | 24 | def pip_install_requirements(requirements, **options): | ||
2264 | 25 | """Install a requirements file """ | ||
2265 | 26 | command = ["install"] | ||
2266 | 27 | |||
2267 | 28 | available_options = ('proxy', 'src', 'log', ) | ||
2268 | 29 | for option in parse_options(options, available_options): | ||
2269 | 30 | command.append(option) | ||
2270 | 31 | |||
2271 | 32 | command.append("-r {0}".format(requirements)) | ||
2272 | 33 | log("Installing from file: {} with options: {}".format(requirements, | ||
2273 | 34 | command)) | ||
2274 | 35 | pip_execute(command) | ||
2275 | 36 | |||
2276 | 37 | |||
2277 | 38 | def pip_install(package, fatal=False, **options): | ||
2278 | 39 | """Install a python package""" | ||
2279 | 40 | command = ["install"] | ||
2280 | 41 | |||
2281 | 42 | available_options = ('proxy', 'src', 'log', "index-url", ) | ||
2282 | 43 | for option in parse_options(options, available_options): | ||
2283 | 44 | command.append(option) | ||
2284 | 45 | |||
2285 | 46 | if isinstance(package, list): | ||
2286 | 47 | command.extend(package) | ||
2287 | 48 | else: | ||
2288 | 49 | command.append(package) | ||
2289 | 50 | |||
2290 | 51 | log("Installing {} package with options: {}".format(package, | ||
2291 | 52 | command)) | ||
2292 | 53 | pip_execute(command) | ||
2293 | 54 | |||
2294 | 55 | |||
2295 | 56 | def pip_uninstall(package, **options): | ||
2296 | 57 | """Uninstall a python package""" | ||
2297 | 58 | command = ["uninstall", "-q", "-y"] | ||
2298 | 59 | |||
2299 | 60 | available_options = ('proxy', 'log', ) | ||
2300 | 61 | for option in parse_options(options, available_options): | ||
2301 | 62 | command.append(option) | ||
2302 | 63 | |||
2303 | 64 | if isinstance(package, list): | ||
2304 | 65 | command.extend(package) | ||
2305 | 66 | else: | ||
2306 | 67 | command.append(package) | ||
2307 | 68 | |||
2308 | 69 | log("Uninstalling {} package with options: {}".format(package, | ||
2309 | 70 | command)) | ||
2310 | 71 | pip_execute(command) | ||
2311 | 72 | |||
2312 | 73 | |||
2313 | 74 | def pip_list(): | ||
2314 | 75 | """Returns the list of current python installed packages | ||
2315 | 76 | """ | ||
2316 | 77 | return pip_execute(["list"]) | ||
2317 | 0 | 78 | ||
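The new pip helpers build a pip command list and filter keyword options through parse_options(), which only emits flags pip actually knows about. A small sketch of how that option filtering composes into a command (copied parse_options logic; the proxy value is a made-up example):

```python
def parse_options(given, available):
    """Yield pip-style --key=value flags for recognized options
    (same logic as the packages.py helper above)."""
    for key, value in sorted(given.items()):
        if key in available:
            yield "--{0}={1}".format(key, value)

command = ["install"]
# 'bogus' is silently dropped because it is not in available_options
command.extend(parse_options({'proxy': 'http://p:3128', 'bogus': 'x'},
                             ('proxy', 'src', 'log')))
command.append("some-package")
print(command)  # ['install', '--proxy=http://p:3128', 'some-package']
```

The resulting list is what pip_install() hands to pip's main() entry point via pip_execute().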
2318 | === added file 'hooks/charmhelpers/contrib/python/rpdb.py' | |||
2319 | --- hooks/charmhelpers/contrib/python/rpdb.py 1970-01-01 00:00:00 +0000 | |||
2320 | +++ hooks/charmhelpers/contrib/python/rpdb.py 2014-12-10 07:57:54 +0000 | |||
2321 | @@ -0,0 +1,42 @@ | |||
2322 | 1 | """Remote Python Debugger (pdb wrapper).""" | ||
2323 | 2 | |||
2324 | 3 | __author__ = "Bertrand Janin <b@janin.com>" | ||
2325 | 4 | __version__ = "0.1.3" | ||
2326 | 5 | |||
2327 | 6 | import pdb | ||
2328 | 7 | import socket | ||
2329 | 8 | import sys | ||
2330 | 9 | |||
2331 | 10 | |||
2332 | 11 | class Rpdb(pdb.Pdb): | ||
2333 | 12 | |||
2334 | 13 | def __init__(self, addr="127.0.0.1", port=4444): | ||
2335 | 14 | """Initialize the socket and initialize pdb.""" | ||
2336 | 15 | |||
2337 | 16 | # Backup stdin and stdout before replacing them by the socket handle | ||
2338 | 17 | self.old_stdout = sys.stdout | ||
2339 | 18 | self.old_stdin = sys.stdin | ||
2340 | 19 | |||
2341 | 20 | # Open a 'reusable' socket to let the webapp reload on the same port | ||
2342 | 21 | self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM) | ||
2343 | 22 | self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True) | ||
2344 | 23 | self.skt.bind((addr, port)) | ||
2345 | 24 | self.skt.listen(1) | ||
2346 | 25 | (clientsocket, address) = self.skt.accept() | ||
2347 | 26 | handle = clientsocket.makefile('rw') | ||
2348 | 27 | pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle) | ||
2349 | 28 | sys.stdout = sys.stdin = handle | ||
2350 | 29 | |||
2351 | 30 | def shutdown(self): | ||
2352 | 31 | """Revert stdin and stdout, close the socket.""" | ||
2353 | 32 | sys.stdout = self.old_stdout | ||
2354 | 33 | sys.stdin = self.old_stdin | ||
2355 | 34 | self.skt.close() | ||
2356 | 35 | self.set_continue() | ||
2357 | 36 | |||
2358 | 37 | def do_continue(self, arg): | ||
2359 | 38 | """Stop all operation on ``continue``.""" | ||
2360 | 39 | self.shutdown() | ||
2361 | 40 | return 1 | ||
2362 | 41 | |||
2363 | 42 | do_EOF = do_quit = do_exit = do_c = do_cont = do_continue | ||
2364 | 0 | 43 | ||
2365 | === added file 'hooks/charmhelpers/contrib/python/version.py' | |||
2366 | --- hooks/charmhelpers/contrib/python/version.py 1970-01-01 00:00:00 +0000 | |||
2367 | +++ hooks/charmhelpers/contrib/python/version.py 2014-12-10 07:57:54 +0000 | |||
2368 | @@ -0,0 +1,18 @@ | |||
2369 | 1 | #!/usr/bin/env python | ||
2370 | 2 | # coding: utf-8 | ||
2371 | 3 | |||
2372 | 4 | __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>" | ||
2373 | 5 | |||
2374 | 6 | import sys | ||
2375 | 7 | |||
2376 | 8 | |||
2377 | 9 | def current_version(): | ||
2378 | 10 | """Current system python version""" | ||
2379 | 11 | return sys.version_info | ||
2380 | 12 | |||
2381 | 13 | |||
2382 | 14 | def current_version_string(): | ||
2383 | 15 | """Current system python version as string major.minor.micro""" | ||
2384 | 16 | return "{0}.{1}.{2}".format(sys.version_info.major, | ||
2385 | 17 | sys.version_info.minor, | ||
2386 | 18 | sys.version_info.micro) | ||
2387 | 0 | 19 | ||
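`current_version_string` formats `sys.version_info` as `major.minor.micro`. A quick sketch of the same formatting — `sys.version_info` is a named tuple, so both attribute and index access work:

```python
import sys

# Same major.minor.micro formatting as current_version_string() above.
version = "{0}.{1}.{2}".format(sys.version_info.major,
                               sys.version_info.minor,
                               sys.version_info.micro)
```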
2388 | === modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py' | |||
2389 | --- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:38:51 +0000 | |||
2390 | +++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-10 07:57:54 +0000 | |||
2391 | @@ -16,19 +16,18 @@ | |||
2392 | 16 | from subprocess import ( | 16 | from subprocess import ( |
2393 | 17 | check_call, | 17 | check_call, |
2394 | 18 | check_output, | 18 | check_output, |
2396 | 19 | CalledProcessError | 19 | CalledProcessError, |
2397 | 20 | ) | 20 | ) |
2398 | 21 | |||
2399 | 22 | from charmhelpers.core.hookenv import ( | 21 | from charmhelpers.core.hookenv import ( |
2400 | 23 | relation_get, | 22 | relation_get, |
2401 | 24 | relation_ids, | 23 | relation_ids, |
2402 | 25 | related_units, | 24 | related_units, |
2403 | 26 | log, | 25 | log, |
2404 | 26 | DEBUG, | ||
2405 | 27 | INFO, | 27 | INFO, |
2406 | 28 | WARNING, | 28 | WARNING, |
2408 | 29 | ERROR | 29 | ERROR, |
2409 | 30 | ) | 30 | ) |
2410 | 31 | |||
2411 | 32 | from charmhelpers.core.host import ( | 31 | from charmhelpers.core.host import ( |
2412 | 33 | mount, | 32 | mount, |
2413 | 34 | mounts, | 33 | mounts, |
2414 | @@ -37,7 +36,6 @@ | |||
2415 | 37 | service_running, | 36 | service_running, |
2416 | 38 | umount, | 37 | umount, |
2417 | 39 | ) | 38 | ) |
2418 | 40 | |||
2419 | 41 | from charmhelpers.fetch import ( | 39 | from charmhelpers.fetch import ( |
2420 | 42 | apt_install, | 40 | apt_install, |
2421 | 43 | ) | 41 | ) |
2422 | @@ -56,99 +54,85 @@ | |||
2423 | 56 | 54 | ||
2424 | 57 | 55 | ||
2425 | 58 | def install(): | 56 | def install(): |
2427 | 59 | ''' Basic Ceph client installation ''' | 57 | """Basic Ceph client installation.""" |
2428 | 60 | ceph_dir = "/etc/ceph" | 58 | ceph_dir = "/etc/ceph" |
2429 | 61 | if not os.path.exists(ceph_dir): | 59 | if not os.path.exists(ceph_dir): |
2430 | 62 | os.mkdir(ceph_dir) | 60 | os.mkdir(ceph_dir) |
2431 | 61 | |||
2432 | 63 | apt_install('ceph-common', fatal=True) | 62 | apt_install('ceph-common', fatal=True) |
2433 | 64 | 63 | ||
2434 | 65 | 64 | ||
2435 | 66 | def rbd_exists(service, pool, rbd_img): | 65 | def rbd_exists(service, pool, rbd_img): |
2437 | 67 | ''' Check to see if a RADOS block device exists ''' | 66 | """Check to see if a RADOS block device exists.""" |
2438 | 68 | try: | 67 | try: |
2441 | 69 | out = check_output(['rbd', 'list', '--id', service, | 68 | out = check_output(['rbd', 'list', '--id', |
2442 | 70 | '--pool', pool]) | 69 | service, '--pool', pool]).decode('UTF-8') |
2443 | 71 | except CalledProcessError: | 70 | except CalledProcessError: |
2444 | 72 | return False | 71 | return False |
2447 | 73 | else: | 72 | |
2448 | 74 | return rbd_img in out | 73 | return rbd_img in out |
2449 | 75 | 74 | ||
2450 | 76 | 75 | ||
2451 | 77 | def create_rbd_image(service, pool, image, sizemb): | 76 | def create_rbd_image(service, pool, image, sizemb): |
2464 | 78 | ''' Create a new RADOS block device ''' | 77 | """Create a new RADOS block device.""" |
2465 | 79 | cmd = [ | 78 | cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service, |
2466 | 80 | 'rbd', | 79 | '--pool', pool] |
2455 | 81 | 'create', | ||
2456 | 82 | image, | ||
2457 | 83 | '--size', | ||
2458 | 84 | str(sizemb), | ||
2459 | 85 | '--id', | ||
2460 | 86 | service, | ||
2461 | 87 | '--pool', | ||
2462 | 88 | pool | ||
2463 | 89 | ] | ||
2467 | 90 | check_call(cmd) | 80 | check_call(cmd) |
2468 | 91 | 81 | ||
2469 | 92 | 82 | ||
2470 | 93 | def pool_exists(service, name): | 83 | def pool_exists(service, name): |
2472 | 94 | ''' Check to see if a RADOS pool already exists ''' | 84 | """Check to see if a RADOS pool already exists.""" |
2473 | 95 | try: | 85 | try: |
2475 | 96 | out = check_output(['rados', '--id', service, 'lspools']) | 86 | out = check_output(['rados', '--id', service, |
2476 | 87 | 'lspools']).decode('UTF-8') | ||
2477 | 97 | except CalledProcessError: | 88 | except CalledProcessError: |
2478 | 98 | return False | 89 | return False |
2481 | 99 | else: | 90 | |
2482 | 100 | return name in out | 91 | return name in out |
2483 | 101 | 92 | ||
2484 | 102 | 93 | ||
2485 | 103 | def get_osds(service): | 94 | def get_osds(service): |
2490 | 104 | ''' | 95 | """Return a list of all Ceph Object Storage Daemons currently in the |
2491 | 105 | Return a list of all Ceph Object Storage Daemons | 96 | cluster. |
2492 | 106 | currently in the cluster | 97 | """ |
2489 | 107 | ''' | ||
2493 | 108 | version = ceph_version() | 98 | version = ceph_version() |
2494 | 109 | if version and version >= '0.56': | 99 | if version and version >= '0.56': |
2495 | 110 | return json.loads(check_output(['ceph', '--id', service, | 100 | return json.loads(check_output(['ceph', '--id', service, |
2503 | 111 | 'osd', 'ls', '--format=json'])) | 101 | 'osd', 'ls', |
2504 | 112 | else: | 102 | '--format=json']).decode('UTF-8')) |
2505 | 113 | return None | 103 | |
2506 | 114 | 104 | return None | |
2507 | 115 | 105 | ||
2508 | 116 | def create_pool(service, name, replicas=2): | 106 | |
2509 | 117 | ''' Create a new RADOS pool ''' | 107 | def create_pool(service, name, replicas=3): |
2510 | 108 | """Create a new RADOS pool.""" | ||
2511 | 118 | if pool_exists(service, name): | 109 | if pool_exists(service, name): |
2512 | 119 | log("Ceph pool {} already exists, skipping creation".format(name), | 110 | log("Ceph pool {} already exists, skipping creation".format(name), |
2513 | 120 | level=WARNING) | 111 | level=WARNING) |
2514 | 121 | return | 112 | return |
2515 | 113 | |||
2516 | 122 | # Calculate the number of placement groups based | 114 | # Calculate the number of placement groups based |
2517 | 123 | # on upstream recommended best practices. | 115 | # on upstream recommended best practices. |
2518 | 124 | osds = get_osds(service) | 116 | osds = get_osds(service) |
2519 | 125 | if osds: | 117 | if osds: |
2521 | 126 | pgnum = (len(osds) * 100 / replicas) | 118 | pgnum = (len(osds) * 100 // replicas) |
2522 | 127 | else: | 119 | else: |
2523 | 128 | # NOTE(james-page): Default to 200 for older ceph versions | 120 | # NOTE(james-page): Default to 200 for older ceph versions |
2524 | 129 | # which don't support OSD query from cli | 121 | # which don't support OSD query from cli |
2525 | 130 | pgnum = 200 | 122 | pgnum = 200 |
2531 | 131 | cmd = [ | 123 | |
2532 | 132 | 'ceph', '--id', service, | 124 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)] |
2528 | 133 | 'osd', 'pool', 'create', | ||
2529 | 134 | name, str(pgnum) | ||
2530 | 135 | ] | ||
2533 | 136 | check_call(cmd) | 125 | check_call(cmd) |
2539 | 137 | cmd = [ | 126 | |
2540 | 138 | 'ceph', '--id', service, | 127 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size', |
2541 | 139 | 'osd', 'pool', 'set', name, | 128 | str(replicas)] |
2537 | 140 | 'size', str(replicas) | ||
2538 | 141 | ] | ||
2542 | 142 | check_call(cmd) | 129 | check_call(cmd) |
2543 | 143 | 130 | ||
2544 | 144 | 131 | ||
2545 | 145 | def delete_pool(service, name): | 132 | def delete_pool(service, name): |
2552 | 146 | ''' Delete a RADOS pool from ceph ''' | 133 | """Delete a RADOS pool from ceph.""" |
2553 | 147 | cmd = [ | 134 | cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name, |
2554 | 148 | 'ceph', '--id', service, | 135 | '--yes-i-really-really-mean-it'] |
2549 | 149 | 'osd', 'pool', 'delete', | ||
2550 | 150 | name, '--yes-i-really-really-mean-it' | ||
2551 | 151 | ] | ||
2555 | 152 | check_call(cmd) | 136 | check_call(cmd) |
2556 | 153 | 137 | ||
2557 | 154 | 138 | ||
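The placement-group arithmetic inside `create_pool` can be isolated as a small sketch. The switch to `//` in this diff keeps the result an integer on Python 3; the 200 fallback comes from the note about older ceph versions that cannot list OSDs from the CLI.

```python
def placement_groups(num_osds, replicas=3):
    # ~100 placement groups per OSD, divided by the replica count.
    # '//' keeps the result integral on both Python 2 and 3.
    if num_osds:
        return num_osds * 100 // replicas
    # Fallback for ceph versions without 'osd ls' support.
    return 200
```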
2558 | @@ -161,44 +145,43 @@ | |||
2559 | 161 | 145 | ||
2560 | 162 | 146 | ||
2561 | 163 | def create_keyring(service, key): | 147 | def create_keyring(service, key): |
2563 | 164 | ''' Create a new Ceph keyring containing key''' | 148 | """Create a new Ceph keyring containing key.""" |
2564 | 165 | keyring = _keyring_path(service) | 149 | keyring = _keyring_path(service) |
2565 | 166 | if os.path.exists(keyring): | 150 | if os.path.exists(keyring): |
2567 | 167 | log('ceph: Keyring exists at %s.' % keyring, level=WARNING) | 151 | log('Ceph keyring exists at %s.' % keyring, level=WARNING) |
2568 | 168 | return | 152 | return |
2576 | 169 | cmd = [ | 153 | |
2577 | 170 | 'ceph-authtool', | 154 | cmd = ['ceph-authtool', keyring, '--create-keyring', |
2578 | 171 | keyring, | 155 | '--name=client.{}'.format(service), '--add-key={}'.format(key)] |
2572 | 172 | '--create-keyring', | ||
2573 | 173 | '--name=client.{}'.format(service), | ||
2574 | 174 | '--add-key={}'.format(key) | ||
2575 | 175 | ] | ||
2579 | 176 | check_call(cmd) | 156 | check_call(cmd) |
2581 | 177 | log('ceph: Created new ring at %s.' % keyring, level=INFO) | 157 | log('Created new ceph keyring at %s.' % keyring, level=DEBUG) |
2582 | 178 | 158 | ||
2583 | 179 | 159 | ||
2584 | 180 | def create_key_file(service, key): | 160 | def create_key_file(service, key): |
2586 | 181 | ''' Create a file containing key ''' | 161 | """Create a file containing key.""" |
2587 | 182 | keyfile = _keyfile_path(service) | 162 | keyfile = _keyfile_path(service) |
2588 | 183 | if os.path.exists(keyfile): | 163 | if os.path.exists(keyfile): |
2590 | 184 | log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING) | 164 | log('Keyfile exists at %s.' % keyfile, level=WARNING) |
2591 | 185 | return | 165 | return |
2592 | 166 | |||
2593 | 186 | with open(keyfile, 'w') as fd: | 167 | with open(keyfile, 'w') as fd: |
2594 | 187 | fd.write(key) | 168 | fd.write(key) |
2596 | 188 | log('ceph: Created new keyfile at %s.' % keyfile, level=INFO) | 169 | |
2597 | 170 | log('Created new keyfile at %s.' % keyfile, level=INFO) | ||
2598 | 189 | 171 | ||
2599 | 190 | 172 | ||
2600 | 191 | def get_ceph_nodes(): | 173 | def get_ceph_nodes(): |
2602 | 192 | ''' Query named relation 'ceph' to detemine current nodes ''' | 174 | """Query named relation 'ceph' to determine current nodes.""" |
2603 | 193 | hosts = [] | 175 | hosts = [] |
2604 | 194 | for r_id in relation_ids('ceph'): | 176 | for r_id in relation_ids('ceph'): |
2605 | 195 | for unit in related_units(r_id): | 177 | for unit in related_units(r_id): |
2606 | 196 | hosts.append(relation_get('private-address', unit=unit, rid=r_id)) | 178 | hosts.append(relation_get('private-address', unit=unit, rid=r_id)) |
2607 | 179 | |||
2608 | 197 | return hosts | 180 | return hosts |
2609 | 198 | 181 | ||
2610 | 199 | 182 | ||
2611 | 200 | def configure(service, key, auth, use_syslog): | 183 | def configure(service, key, auth, use_syslog): |
2613 | 201 | ''' Perform basic configuration of Ceph ''' | 184 | """Perform basic configuration of Ceph.""" |
2614 | 202 | create_keyring(service, key) | 185 | create_keyring(service, key) |
2615 | 203 | create_key_file(service, key) | 186 | create_key_file(service, key) |
2616 | 204 | hosts = get_ceph_nodes() | 187 | hosts = get_ceph_nodes() |
2617 | @@ -211,17 +194,17 @@ | |||
2618 | 211 | 194 | ||
2619 | 212 | 195 | ||
2620 | 213 | def image_mapped(name): | 196 | def image_mapped(name): |
2622 | 214 | ''' Determine whether a RADOS block device is mapped locally ''' | 197 | """Determine whether a RADOS block device is mapped locally.""" |
2623 | 215 | try: | 198 | try: |
2625 | 216 | out = check_output(['rbd', 'showmapped']) | 199 | out = check_output(['rbd', 'showmapped']).decode('UTF-8') |
2626 | 217 | except CalledProcessError: | 200 | except CalledProcessError: |
2627 | 218 | return False | 201 | return False |
2630 | 219 | else: | 202 | |
2631 | 220 | return name in out | 203 | return name in out |
2632 | 221 | 204 | ||
2633 | 222 | 205 | ||
2634 | 223 | def map_block_storage(service, pool, image): | 206 | def map_block_storage(service, pool, image): |
2636 | 224 | ''' Map a RADOS block device for local use ''' | 207 | """Map a RADOS block device for local use.""" |
2637 | 225 | cmd = [ | 208 | cmd = [ |
2638 | 226 | 'rbd', | 209 | 'rbd', |
2639 | 227 | 'map', | 210 | 'map', |
2640 | @@ -235,31 +218,32 @@ | |||
2641 | 235 | 218 | ||
2642 | 236 | 219 | ||
2643 | 237 | def filesystem_mounted(fs): | 220 | def filesystem_mounted(fs): |
2645 | 238 | ''' Determine whether a filesytems is already mounted ''' | 221 | """Determine whether a filesystem is already mounted.""" |
2646 | 239 | return fs in [f for f, m in mounts()] | 222 | return fs in [f for f, m in mounts()] |
2647 | 240 | 223 | ||
2648 | 241 | 224 | ||
2649 | 242 | def make_filesystem(blk_device, fstype='ext4', timeout=10): | 225 | def make_filesystem(blk_device, fstype='ext4', timeout=10): |
2651 | 243 | ''' Make a new filesystem on the specified block device ''' | 226 | """Make a new filesystem on the specified block device.""" |
2652 | 244 | count = 0 | 227 | count = 0 |
2653 | 245 | e_noent = os.errno.ENOENT | 228 | e_noent = os.errno.ENOENT |
2654 | 246 | while not os.path.exists(blk_device): | 229 | while not os.path.exists(blk_device): |
2655 | 247 | if count >= timeout: | 230 | if count >= timeout: |
2657 | 248 | log('ceph: gave up waiting on block device %s' % blk_device, | 231 | log('Gave up waiting on block device %s' % blk_device, |
2658 | 249 | level=ERROR) | 232 | level=ERROR) |
2659 | 250 | raise IOError(e_noent, os.strerror(e_noent), blk_device) | 233 | raise IOError(e_noent, os.strerror(e_noent), blk_device) |
2662 | 251 | log('ceph: waiting for block device %s to appear' % blk_device, | 234 | |
2663 | 252 | level=INFO) | 235 | log('Waiting for block device %s to appear' % blk_device, |
2664 | 236 | level=DEBUG) | ||
2665 | 253 | count += 1 | 237 | count += 1 |
2666 | 254 | time.sleep(1) | 238 | time.sleep(1) |
2667 | 255 | else: | 239 | else: |
2669 | 256 | log('ceph: Formatting block device %s as filesystem %s.' % | 240 | log('Formatting block device %s as filesystem %s.' % |
2670 | 257 | (blk_device, fstype), level=INFO) | 241 | (blk_device, fstype), level=INFO) |
2671 | 258 | check_call(['mkfs', '-t', fstype, blk_device]) | 242 | check_call(['mkfs', '-t', fstype, blk_device]) |
2672 | 259 | 243 | ||
2673 | 260 | 244 | ||
2674 | 261 | def place_data_on_block_device(blk_device, data_src_dst): | 245 | def place_data_on_block_device(blk_device, data_src_dst): |
2676 | 262 | ''' Migrate data in data_src_dst to blk_device and then remount ''' | 246 | """Migrate data in data_src_dst to blk_device and then remount.""" |
2677 | 263 | # mount block device into /mnt | 247 | # mount block device into /mnt |
2678 | 264 | mount(blk_device, '/mnt') | 248 | mount(blk_device, '/mnt') |
2679 | 265 | # copy data to /mnt | 249 | # copy data to /mnt |
2680 | @@ -279,8 +263,8 @@ | |||
2681 | 279 | 263 | ||
2682 | 280 | # TODO: re-use | 264 | # TODO: re-use |
2683 | 281 | def modprobe(module): | 265 | def modprobe(module): |
2686 | 282 | ''' Load a kernel module and configure for auto-load on reboot ''' | 266 | """Load a kernel module and configure for auto-load on reboot.""" |
2687 | 283 | log('ceph: Loading kernel module', level=INFO) | 267 | log('Loading kernel module', level=INFO) |
2688 | 284 | cmd = ['modprobe', module] | 268 | cmd = ['modprobe', module] |
2689 | 285 | check_call(cmd) | 269 | check_call(cmd) |
2690 | 286 | with open('/etc/modules', 'r+') as modules: | 270 | with open('/etc/modules', 'r+') as modules: |
2691 | @@ -289,7 +273,7 @@ | |||
2692 | 289 | 273 | ||
2693 | 290 | 274 | ||
2694 | 291 | def copy_files(src, dst, symlinks=False, ignore=None): | 275 | def copy_files(src, dst, symlinks=False, ignore=None): |
2696 | 292 | ''' Copy files from src to dst ''' | 276 | """Copy files from src to dst.""" |
2697 | 293 | for item in os.listdir(src): | 277 | for item in os.listdir(src): |
2698 | 294 | s = os.path.join(src, item) | 278 | s = os.path.join(src, item) |
2699 | 295 | d = os.path.join(dst, item) | 279 | d = os.path.join(dst, item) |
2700 | @@ -300,9 +284,9 @@ | |||
2701 | 300 | 284 | ||
2702 | 301 | 285 | ||
2703 | 302 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, | 286 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, |
2707 | 303 | blk_device, fstype, system_services=[]): | 287 | blk_device, fstype, system_services=[], |
2708 | 304 | """ | 288 | replicas=3): |
2709 | 305 | NOTE: This function must only be called from a single service unit for | 289 | """NOTE: This function must only be called from a single service unit for |
2710 | 306 | the same rbd_img otherwise data loss will occur. | 290 | the same rbd_img otherwise data loss will occur. |
2711 | 307 | 291 | ||
2712 | 308 | Ensures given pool and RBD image exists, is mapped to a block device, | 292 | Ensures given pool and RBD image exists, is mapped to a block device, |
2713 | @@ -316,15 +300,16 @@ | |||
2714 | 316 | """ | 300 | """ |
2715 | 317 | # Ensure pool, RBD image, RBD mappings are in place. | 301 | # Ensure pool, RBD image, RBD mappings are in place. |
2716 | 318 | if not pool_exists(service, pool): | 302 | if not pool_exists(service, pool): |
2719 | 319 | log('ceph: Creating new pool {}.'.format(pool)) | 303 | log('Creating new pool {}.'.format(pool), level=INFO) |
2720 | 320 | create_pool(service, pool) | 304 | create_pool(service, pool, replicas=replicas) |
2721 | 321 | 305 | ||
2722 | 322 | if not rbd_exists(service, pool, rbd_img): | 306 | if not rbd_exists(service, pool, rbd_img): |
2724 | 323 | log('ceph: Creating RBD image ({}).'.format(rbd_img)) | 307 | log('Creating RBD image ({}).'.format(rbd_img), level=INFO) |
2725 | 324 | create_rbd_image(service, pool, rbd_img, sizemb) | 308 | create_rbd_image(service, pool, rbd_img, sizemb) |
2726 | 325 | 309 | ||
2727 | 326 | if not image_mapped(rbd_img): | 310 | if not image_mapped(rbd_img): |
2729 | 327 | log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img)) | 311 | log('Mapping RBD Image {} as a Block Device.'.format(rbd_img), |
2730 | 312 | level=INFO) | ||
2731 | 328 | map_block_storage(service, pool, rbd_img) | 313 | map_block_storage(service, pool, rbd_img) |
2732 | 329 | 314 | ||
2733 | 330 | # make file system | 315 | # make file system |
2734 | @@ -339,45 +324,47 @@ | |||
2735 | 339 | 324 | ||
2736 | 340 | for svc in system_services: | 325 | for svc in system_services: |
2737 | 341 | if service_running(svc): | 326 | if service_running(svc): |
2740 | 342 | log('ceph: Stopping services {} prior to migrating data.' | 327 | log('Stopping services {} prior to migrating data.' |
2741 | 343 | .format(svc)) | 328 | .format(svc), level=DEBUG) |
2742 | 344 | service_stop(svc) | 329 | service_stop(svc) |
2743 | 345 | 330 | ||
2744 | 346 | place_data_on_block_device(blk_device, mount_point) | 331 | place_data_on_block_device(blk_device, mount_point) |
2745 | 347 | 332 | ||
2746 | 348 | for svc in system_services: | 333 | for svc in system_services: |
2749 | 349 | log('ceph: Starting service {} after migrating data.' | 334 | log('Starting service {} after migrating data.' |
2750 | 350 | .format(svc)) | 335 | .format(svc), level=DEBUG) |
2751 | 351 | service_start(svc) | 336 | service_start(svc) |
2752 | 352 | 337 | ||
2753 | 353 | 338 | ||
2754 | 354 | def ensure_ceph_keyring(service, user=None, group=None): | 339 | def ensure_ceph_keyring(service, user=None, group=None): |
2758 | 355 | ''' | 340 | """Ensures a ceph keyring is created for a named service and optionally |
2759 | 356 | Ensures a ceph keyring is created for a named service | 341 | ensures user and group ownership. |
2757 | 357 | and optionally ensures user and group ownership. | ||
2760 | 358 | 342 | ||
2761 | 359 | Returns False if no ceph key is available in relation state. | 343 | Returns False if no ceph key is available in relation state. |
2763 | 360 | ''' | 344 | """ |
2764 | 361 | key = None | 345 | key = None |
2765 | 362 | for rid in relation_ids('ceph'): | 346 | for rid in relation_ids('ceph'): |
2766 | 363 | for unit in related_units(rid): | 347 | for unit in related_units(rid): |
2767 | 364 | key = relation_get('key', rid=rid, unit=unit) | 348 | key = relation_get('key', rid=rid, unit=unit) |
2768 | 365 | if key: | 349 | if key: |
2769 | 366 | break | 350 | break |
2770 | 351 | |||
2771 | 367 | if not key: | 352 | if not key: |
2772 | 368 | return False | 353 | return False |
2773 | 354 | |||
2774 | 369 | create_keyring(service=service, key=key) | 355 | create_keyring(service=service, key=key) |
2775 | 370 | keyring = _keyring_path(service) | 356 | keyring = _keyring_path(service) |
2776 | 371 | if user and group: | 357 | if user and group: |
2777 | 372 | check_call(['chown', '%s.%s' % (user, group), keyring]) | 358 | check_call(['chown', '%s.%s' % (user, group), keyring]) |
2778 | 359 | |||
2779 | 373 | return True | 360 | return True |
2780 | 374 | 361 | ||
2781 | 375 | 362 | ||
2782 | 376 | def ceph_version(): | 363 | def ceph_version(): |
2784 | 377 | ''' Retrieve the local version of ceph ''' | 364 | """Retrieve the local version of ceph.""" |
2785 | 378 | if os.path.exists('/usr/bin/ceph'): | 365 | if os.path.exists('/usr/bin/ceph'): |
2786 | 379 | cmd = ['ceph', '-v'] | 366 | cmd = ['ceph', '-v'] |
2788 | 380 | output = check_output(cmd) | 367 | output = check_output(cmd).decode('US-ASCII') |
2789 | 381 | output = output.split() | 368 | output = output.split() |
2790 | 382 | if len(output) > 3: | 369 | if len(output) > 3: |
2791 | 383 | return output[2] | 370 | return output[2] |
2792 | 384 | 371 | ||
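`ceph_version` parses the output of `ceph -v`, which looks roughly like `ceph version 0.80.7 (<sha1>)`; the third whitespace-separated field is the version. The sample string below is illustrative, not captured from a real host:

```python
# Illustrative `ceph -v` output; the sha1 is a made-up placeholder.
sample = "ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)"

fields = sample.split()
# Guard against short/unexpected output, as ceph_version() does.
version = fields[2] if len(fields) > 3 else None
```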
2793 | === modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py' | |||
2794 | --- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-08-12 21:48:24 +0000 | |||
2795 | +++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-10 07:57:54 +0000 | |||
2796 | @@ -1,12 +1,12 @@ | |||
2797 | 1 | |||
2798 | 2 | import os | 1 | import os |
2799 | 3 | import re | 2 | import re |
2800 | 4 | |||
2801 | 5 | from subprocess import ( | 3 | from subprocess import ( |
2802 | 6 | check_call, | 4 | check_call, |
2803 | 7 | check_output, | 5 | check_output, |
2804 | 8 | ) | 6 | ) |
2805 | 9 | 7 | ||
2806 | 8 | import six | ||
2807 | 9 | |||
2808 | 10 | 10 | ||
2809 | 11 | ################################################## | 11 | ################################################## |
2810 | 12 | # loopback device helpers. | 12 | # loopback device helpers. |
2811 | @@ -37,7 +37,7 @@ | |||
2812 | 37 | ''' | 37 | ''' |
2813 | 38 | file_path = os.path.abspath(file_path) | 38 | file_path = os.path.abspath(file_path) |
2814 | 39 | check_call(['losetup', '--find', file_path]) | 39 | check_call(['losetup', '--find', file_path]) |
2816 | 40 | for d, f in loopback_devices().iteritems(): | 40 | for d, f in six.iteritems(loopback_devices()): |
2817 | 41 | if f == file_path: | 41 | if f == file_path: |
2818 | 42 | return d | 42 | return d |
2819 | 43 | 43 | ||
2820 | @@ -51,7 +51,7 @@ | |||
2821 | 51 | 51 | ||
2822 | 52 | :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) | 52 | :returns: str: Full path to the ensured loopback device (eg, /dev/loop0) |
2823 | 53 | ''' | 53 | ''' |
2825 | 54 | for d, f in loopback_devices().iteritems(): | 54 | for d, f in six.iteritems(loopback_devices()): |
2826 | 55 | if f == path: | 55 | if f == path: |
2827 | 56 | return d | 56 | return d |
2828 | 57 | 57 | ||
2829 | 58 | 58 | ||
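The `six.iteritems(d)` calls introduced here are the portable spelling of Python 2's `d.iteritems()`; on Python 3 they behave like `d.items()`. The device-lookup pattern used by the loopback helpers can therefore be sketched with plain `items()` (the function name and sample mapping are illustrative, not part of charm-helpers):

```python
# Hypothetical stand-in for the lookup loop in create_loopback().
def find_loop_device(mapping, file_path):
    for d, f in mapping.items():  # stands in for six.iteritems(mapping)
        if f == file_path:
            return d              # device backing this file
    return None
```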
2830 | === modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py' | |||
2831 | --- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:41:02 +0000 | |||
2832 | +++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-10 07:57:54 +0000 | |||
2833 | @@ -61,6 +61,7 @@ | |||
2834 | 61 | vg = None | 61 | vg = None |
2835 | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() | 62 | pvd = check_output(['pvdisplay', block_device]).splitlines() |
2836 | 63 | for l in pvd: | 63 | for l in pvd: |
2837 | 64 | l = l.decode('UTF-8') | ||
2838 | 64 | if l.strip().startswith('VG Name'): | 65 | if l.strip().startswith('VG Name'): |
2839 | 65 | vg = ' '.join(l.strip().split()[2:]) | 66 | vg = ' '.join(l.strip().split()[2:]) |
2840 | 66 | return vg | 67 | return vg |
2841 | 67 | 68 | ||
2842 | === modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py' | |||
2843 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:14 +0000 | |||
2844 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-10 07:57:54 +0000 | |||
2845 | @@ -30,7 +30,8 @@ | |||
2846 | 30 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up | 30 | # sometimes sgdisk exits non-zero; this is OK, dd will clean up |
2847 | 31 | call(['sgdisk', '--zap-all', '--mbrtogpt', | 31 | call(['sgdisk', '--zap-all', '--mbrtogpt', |
2848 | 32 | '--clear', block_device]) | 32 | '--clear', block_device]) |
2850 | 33 | dev_end = check_output(['blockdev', '--getsz', block_device]) | 33 | dev_end = check_output(['blockdev', '--getsz', |
2851 | 34 | block_device]).decode('UTF-8') | ||
2852 | 34 | gpt_end = int(dev_end.split()[0]) - 100 | 35 | gpt_end = int(dev_end.split()[0]) - 100 |
2853 | 35 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), | 36 | check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device), |
2854 | 36 | 'bs=1M', 'count=1']) | 37 | 'bs=1M', 'count=1']) |
2855 | @@ -47,7 +48,7 @@ | |||
2856 | 47 | it doesn't. | 48 | it doesn't. |
2857 | 48 | ''' | 49 | ''' |
2858 | 49 | is_partition = bool(re.search(r".*[0-9]+\b", device)) | 50 | is_partition = bool(re.search(r".*[0-9]+\b", device)) |
2860 | 50 | out = check_output(['mount']) | 51 | out = check_output(['mount']).decode('UTF-8') |
2861 | 51 | if is_partition: | 52 | if is_partition: |
2862 | 52 | return bool(re.search(device + r"\b", out)) | 53 | return bool(re.search(device + r"\b", out)) |
2863 | 53 | return bool(re.search(device + r"[0-9]+\b", out)) | 54 | return bool(re.search(device + r"[0-9]+\b", out)) |
2864 | 54 | 55 | ||
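The `.decode('UTF-8')` calls added throughout this branch exist because `subprocess.check_output` returns `bytes` on Python 3, so `in` checks and `re.search` against `str` patterns fail without an explicit decode. A minimal demonstration (using `echo` in place of `mount`/`blockdev`):

```python
from subprocess import check_output

# check_output returns bytes on Python 3...
out = check_output(['echo', 'sda1'])
# ...so decode before doing str comparisons or regex matching.
text = out.decode('UTF-8')
```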
2865 | === modified file 'hooks/charmhelpers/core/fstab.py' | |||
2866 | --- hooks/charmhelpers/core/fstab.py 2014-07-11 02:24:52 +0000 | |||
2867 | +++ hooks/charmhelpers/core/fstab.py 2014-12-10 07:57:54 +0000 | |||
2868 | @@ -3,10 +3,11 @@ | |||
2869 | 3 | 3 | ||
2870 | 4 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' | 4 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' |
2871 | 5 | 5 | ||
2872 | 6 | import io | ||
2873 | 6 | import os | 7 | import os |
2874 | 7 | 8 | ||
2875 | 8 | 9 | ||
2877 | 9 | class Fstab(file): | 10 | class Fstab(io.FileIO): |
2878 | 10 | """This class extends file in order to implement a file reader/writer | 11 | """This class extends io.FileIO to implement a file reader/writer |
2879 | 11 | for file `/etc/fstab` | 12 | for file `/etc/fstab` |
2880 | 12 | """ | 13 | """ |
2881 | @@ -24,8 +25,8 @@ | |||
2882 | 24 | options = "defaults" | 25 | options = "defaults" |
2883 | 25 | 26 | ||
2884 | 26 | self.options = options | 27 | self.options = options |
2887 | 27 | self.d = d | 28 | self.d = int(d) |
2888 | 28 | self.p = p | 29 | self.p = int(p) |
2889 | 29 | 30 | ||
2890 | 30 | def __eq__(self, o): | 31 | def __eq__(self, o): |
2891 | 31 | return str(self) == str(o) | 32 | return str(self) == str(o) |
2892 | @@ -45,7 +46,7 @@ | |||
2893 | 45 | self._path = path | 46 | self._path = path |
2894 | 46 | else: | 47 | else: |
2895 | 47 | self._path = self.DEFAULT_PATH | 48 | self._path = self.DEFAULT_PATH |
2897 | 48 | file.__init__(self, self._path, 'r+') | 49 | super(Fstab, self).__init__(self._path, 'rb+') |
2898 | 49 | 50 | ||
2899 | 50 | def _hydrate_entry(self, line): | 51 | def _hydrate_entry(self, line): |
2900 | 51 | # NOTE: use split with no arguments to split on any | 52 | # NOTE: use split with no arguments to split on any |
2901 | @@ -58,8 +59,9 @@ | |||
2902 | 58 | def entries(self): | 59 | def entries(self): |
2903 | 59 | self.seek(0) | 60 | self.seek(0) |
2904 | 60 | for line in self.readlines(): | 61 | for line in self.readlines(): |
2905 | 62 | line = line.decode('us-ascii') | ||
2906 | 61 | try: | 63 | try: |
2908 | 62 | if not line.startswith("#"): | 64 | if line.strip() and not line.startswith("#"): |
2909 | 63 | yield self._hydrate_entry(line) | 65 | yield self._hydrate_entry(line) |
2910 | 64 | except ValueError: | 66 | except ValueError: |
2911 | 65 | pass | 67 | pass |
2912 | @@ -75,14 +77,14 @@ | |||
2913 | 75 | if self.get_entry_by_attr('device', entry.device): | 77 | if self.get_entry_by_attr('device', entry.device): |
2914 | 76 | return False | 78 | return False |
2915 | 77 | 79 | ||
2917 | 78 | self.write(str(entry) + '\n') | 80 | self.write((str(entry) + '\n').encode('us-ascii')) |
2918 | 79 | self.truncate() | 81 | self.truncate() |
2919 | 80 | return entry | 82 | return entry |
2920 | 81 | 83 | ||
2921 | 82 | def remove_entry(self, entry): | 84 | def remove_entry(self, entry): |
2922 | 83 | self.seek(0) | 85 | self.seek(0) |
2923 | 84 | 86 | ||
2925 | 85 | lines = self.readlines() | 87 | lines = [l.decode('us-ascii') for l in self.readlines()] |
2926 | 86 | 88 | ||
2927 | 87 | found = False | 89 | found = False |
2928 | 88 | for index, line in enumerate(lines): | 90 | for index, line in enumerate(lines): |
2929 | @@ -97,7 +99,7 @@ | |||
2930 | 97 | lines.remove(line) | 99 | lines.remove(line) |
2931 | 98 | 100 | ||
2932 | 99 | self.seek(0) | 101 | self.seek(0) |
2934 | 100 | self.write(''.join(lines)) | 102 | self.write(''.join(lines).encode('us-ascii')) |
2935 | 101 | self.truncate() | 103 | self.truncate() |
2936 | 102 | return True | 104 | return True |
2937 | 103 | 105 | ||
2938 | 104 | 106 | ||
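The `fstab.py` hunks above port `Fstab` to Python 3 by decoding each line on read and encoding before writing back, since the class reads and writes raw bytes. A minimal sketch of the same round-trip pattern; `read_entries` and `serialize` are illustrative names of my own, not charm-helpers functions:

```python
def read_entries(raw_lines):
    """Yield non-blank, non-comment fstab lines decoded from us-ascii bytes.

    Fstab subclasses io.FileIO, so readlines() returns bytes under
    Python 3; decoding on read and encoding on write keeps one code
    path working on both interpreters.
    """
    for line in raw_lines:
        line = line.decode('us-ascii')
        if line.strip() and not line.startswith('#'):
            yield line.rstrip('\n')


def serialize(entries):
    """Encode entries back to the bytes the underlying FileIO expects."""
    return ''.join(e + '\n' for e in entries).encode('us-ascii')


lines = [b'# /etc/fstab\n', b'\n', b'/dev/sda1 / ext4 defaults 0 1\n']
print(list(read_entries(lines)))
```

The `line.strip()` guard mirrors the new `if line.strip() and not line.startswith("#")` check, which also skips blank lines that previously raised `ValueError` inside `_hydrate_entry`.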
2939 | === modified file 'hooks/charmhelpers/core/hookenv.py' | |||
2940 | --- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:57:43 +0000 | |||
2941 | +++ hooks/charmhelpers/core/hookenv.py 2014-12-10 07:57:54 +0000 | |||
2942 | @@ -9,9 +9,14 @@ | |||
2943 | 9 | import yaml | 9 | import yaml |
2944 | 10 | import subprocess | 10 | import subprocess |
2945 | 11 | import sys | 11 | import sys |
2946 | 12 | import UserDict | ||
2947 | 13 | from subprocess import CalledProcessError | 12 | from subprocess import CalledProcessError |
2948 | 14 | 13 | ||
2949 | 14 | import six | ||
2950 | 15 | if not six.PY3: | ||
2951 | 16 | from UserDict import UserDict | ||
2952 | 17 | else: | ||
2953 | 18 | from collections import UserDict | ||
2954 | 19 | |||
2955 | 15 | CRITICAL = "CRITICAL" | 20 | CRITICAL = "CRITICAL" |
2956 | 16 | ERROR = "ERROR" | 21 | ERROR = "ERROR" |
2957 | 17 | WARNING = "WARNING" | 22 | WARNING = "WARNING" |
2958 | @@ -63,16 +68,18 @@ | |||
2959 | 63 | command = ['juju-log'] | 68 | command = ['juju-log'] |
2960 | 64 | if level: | 69 | if level: |
2961 | 65 | command += ['-l', level] | 70 | command += ['-l', level] |
2962 | 71 | if not isinstance(message, six.string_types): | ||
2963 | 72 | message = repr(message) | ||
2964 | 66 | command += [message] | 73 | command += [message] |
2965 | 67 | subprocess.call(command) | 74 | subprocess.call(command) |
2966 | 68 | 75 | ||
2967 | 69 | 76 | ||
2969 | 70 | class Serializable(UserDict.IterableUserDict): | 77 | class Serializable(UserDict): |
2970 | 71 | """Wrapper, an object that can be serialized to yaml or json""" | 78 | """Wrapper, an object that can be serialized to yaml or json""" |
2971 | 72 | 79 | ||
2972 | 73 | def __init__(self, obj): | 80 | def __init__(self, obj): |
2973 | 74 | # wrap the object | 81 | # wrap the object |
2975 | 75 | UserDict.IterableUserDict.__init__(self) | 82 | UserDict.__init__(self) |
2976 | 76 | self.data = obj | 83 | self.data = obj |
2977 | 77 | 84 | ||
2978 | 78 | def __getattr__(self, attr): | 85 | def __getattr__(self, attr): |
2979 | @@ -214,6 +221,12 @@ | |||
2980 | 214 | except KeyError: | 221 | except KeyError: |
2981 | 215 | return (self._prev_dict or {})[key] | 222 | return (self._prev_dict or {})[key] |
2982 | 216 | 223 | ||
2983 | 224 | def keys(self): | ||
2984 | 225 | prev_keys = [] | ||
2985 | 226 | if self._prev_dict is not None: | ||
2986 | 227 | prev_keys = self._prev_dict.keys() | ||
2987 | 228 | return list(set(prev_keys + list(dict.keys(self)))) | ||
2988 | 229 | |||
2989 | 217 | def load_previous(self, path=None): | 230 | def load_previous(self, path=None): |
2990 | 218 | """Load previous copy of config from disk. | 231 | """Load previous copy of config from disk. |
2991 | 219 | 232 | ||
2992 | @@ -263,7 +276,7 @@ | |||
2993 | 263 | 276 | ||
2994 | 264 | """ | 277 | """ |
2995 | 265 | if self._prev_dict: | 278 | if self._prev_dict: |
2997 | 266 | for k, v in self._prev_dict.iteritems(): | 279 | for k, v in six.iteritems(self._prev_dict): |
2998 | 267 | if k not in self: | 280 | if k not in self: |
2999 | 268 | self[k] = v | 281 | self[k] = v |
3000 | 269 | with open(self.path, 'w') as f: | 282 | with open(self.path, 'w') as f: |
3001 | @@ -278,7 +291,8 @@ | |||
3002 | 278 | config_cmd_line.append(scope) | 291 | config_cmd_line.append(scope) |
3003 | 279 | config_cmd_line.append('--format=json') | 292 | config_cmd_line.append('--format=json') |
3004 | 280 | try: | 293 | try: |
3006 | 281 | config_data = json.loads(subprocess.check_output(config_cmd_line)) | 294 | config_data = json.loads( |
3007 | 295 | subprocess.check_output(config_cmd_line).decode('UTF-8')) | ||
3008 | 282 | if scope is not None: | 296 | if scope is not None: |
3009 | 283 | return config_data | 297 | return config_data |
3010 | 284 | return Config(config_data) | 298 | return Config(config_data) |
3011 | @@ -297,10 +311,10 @@ | |||
3012 | 297 | if unit: | 311 | if unit: |
3013 | 298 | _args.append(unit) | 312 | _args.append(unit) |
3014 | 299 | try: | 313 | try: |
3016 | 300 | return json.loads(subprocess.check_output(_args)) | 314 | return json.loads(subprocess.check_output(_args).decode('UTF-8')) |
3017 | 301 | except ValueError: | 315 | except ValueError: |
3018 | 302 | return None | 316 | return None |
3020 | 303 | except CalledProcessError, e: | 317 | except CalledProcessError as e: |
3021 | 304 | if e.returncode == 2: | 318 | if e.returncode == 2: |
3022 | 305 | return None | 319 | return None |
3023 | 306 | raise | 320 | raise |
3024 | @@ -312,7 +326,7 @@ | |||
3025 | 312 | relation_cmd_line = ['relation-set'] | 326 | relation_cmd_line = ['relation-set'] |
3026 | 313 | if relation_id is not None: | 327 | if relation_id is not None: |
3027 | 314 | relation_cmd_line.extend(('-r', relation_id)) | 328 | relation_cmd_line.extend(('-r', relation_id)) |
3029 | 315 | for k, v in (relation_settings.items() + kwargs.items()): | 329 | for k, v in (list(relation_settings.items()) + list(kwargs.items())): |
3030 | 316 | if v is None: | 330 | if v is None: |
3031 | 317 | relation_cmd_line.append('{}='.format(k)) | 331 | relation_cmd_line.append('{}='.format(k)) |
3032 | 318 | else: | 332 | else: |
3033 | @@ -329,7 +343,8 @@ | |||
3034 | 329 | relid_cmd_line = ['relation-ids', '--format=json'] | 343 | relid_cmd_line = ['relation-ids', '--format=json'] |
3035 | 330 | if reltype is not None: | 344 | if reltype is not None: |
3036 | 331 | relid_cmd_line.append(reltype) | 345 | relid_cmd_line.append(reltype) |
3038 | 332 | return json.loads(subprocess.check_output(relid_cmd_line)) or [] | 346 | return json.loads( |
3039 | 347 | subprocess.check_output(relid_cmd_line).decode('UTF-8')) or [] | ||
3040 | 333 | return [] | 348 | return [] |
3041 | 334 | 349 | ||
3042 | 335 | 350 | ||
3043 | @@ -340,7 +355,8 @@ | |||
3044 | 340 | units_cmd_line = ['relation-list', '--format=json'] | 355 | units_cmd_line = ['relation-list', '--format=json'] |
3045 | 341 | if relid is not None: | 356 | if relid is not None: |
3046 | 342 | units_cmd_line.extend(('-r', relid)) | 357 | units_cmd_line.extend(('-r', relid)) |
3048 | 343 | return json.loads(subprocess.check_output(units_cmd_line)) or [] | 358 | return json.loads( |
3049 | 359 | subprocess.check_output(units_cmd_line).decode('UTF-8')) or [] | ||
3050 | 344 | 360 | ||
3051 | 345 | 361 | ||
3052 | 346 | @cached | 362 | @cached |
3053 | @@ -449,7 +465,7 @@ | |||
3054 | 449 | """Get the unit ID for the remote unit""" | 465 | """Get the unit ID for the remote unit""" |
3055 | 450 | _args = ['unit-get', '--format=json', attribute] | 466 | _args = ['unit-get', '--format=json', attribute] |
3056 | 451 | try: | 467 | try: |
3058 | 452 | return json.loads(subprocess.check_output(_args)) | 468 | return json.loads(subprocess.check_output(_args).decode('UTF-8')) |
3059 | 453 | except ValueError: | 469 | except ValueError: |
3060 | 454 | return None | 470 | return None |
3061 | 455 | 471 | ||
3062 | 456 | 472 | ||
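The `hookenv.py` hunks consistently append `.decode('UTF-8')` to every `subprocess.check_output(...)` call before `json.loads`, because `check_output` returns bytes under Python 3. A self-contained sketch of the pattern (the function name and the `echo` command are mine for illustration; the real helpers call `config`, `relation-get`, `relation-ids`, etc.):

```python
import json
import subprocess


def json_from_cmd(cmd):
    """Run a command and parse its JSON output on both Python 2 and 3.

    check_output() yields bytes on Python 3; decoding to UTF-8 first
    gives json.loads the str it expects on either interpreter.
    """
    try:
        return json.loads(subprocess.check_output(cmd).decode('UTF-8'))
    except ValueError:
        # output was not valid JSON (mirrors the helpers' behaviour)
        return None
    except subprocess.CalledProcessError as e:
        # 'except X as e' is the spelling valid on both interpreters,
        # replacing the Python-2-only 'except X, e' removed in the diff
        if e.returncode == 2:
            return None
        raise


print(json_from_cmd(['echo', '{"name": "nova-compute"}']))
```

The same change also motivates `except CalledProcessError as e`: the comma form deleted in these hunks is a syntax error on Python 3.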
3063 | === modified file 'hooks/charmhelpers/core/host.py' | |||
3064 | --- hooks/charmhelpers/core/host.py 2014-10-06 21:57:43 +0000 | |||
3065 | +++ hooks/charmhelpers/core/host.py 2014-12-10 07:57:54 +0000 | |||
3066 | @@ -6,19 +6,20 @@ | |||
3067 | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> |
3068 | 7 | 7 | ||
3069 | 8 | import os | 8 | import os |
3070 | 9 | import re | ||
3071 | 9 | import pwd | 10 | import pwd |
3072 | 10 | import grp | 11 | import grp |
3073 | 11 | import random | 12 | import random |
3074 | 12 | import string | 13 | import string |
3075 | 13 | import subprocess | 14 | import subprocess |
3076 | 14 | import hashlib | 15 | import hashlib |
3077 | 15 | import shutil | ||
3078 | 16 | from contextlib import contextmanager | 16 | from contextlib import contextmanager |
3079 | 17 | |||
3080 | 18 | from collections import OrderedDict | 17 | from collections import OrderedDict |
3081 | 19 | 18 | ||
3084 | 20 | from hookenv import log | 19 | import six |
3085 | 21 | from fstab import Fstab | 20 | |
3086 | 21 | from .hookenv import log | ||
3087 | 22 | from .fstab import Fstab | ||
3088 | 22 | 23 | ||
3089 | 23 | 24 | ||
3090 | 24 | def service_start(service_name): | 25 | def service_start(service_name): |
3091 | @@ -54,7 +55,9 @@ | |||
3092 | 54 | def service_running(service): | 55 | def service_running(service): |
3093 | 55 | """Determine whether a system service is running""" | 56 | """Determine whether a system service is running""" |
3094 | 56 | try: | 57 | try: |
3096 | 57 | output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT) | 58 | output = subprocess.check_output( |
3097 | 59 | ['service', service, 'status'], | ||
3098 | 60 | stderr=subprocess.STDOUT).decode('UTF-8') | ||
3099 | 58 | except subprocess.CalledProcessError: | 61 | except subprocess.CalledProcessError: |
3100 | 59 | return False | 62 | return False |
3101 | 60 | else: | 63 | else: |
3102 | @@ -67,7 +70,9 @@ | |||
3103 | 67 | def service_available(service_name): | 70 | def service_available(service_name): |
3104 | 68 | """Determine whether a system service is available""" | 71 | """Determine whether a system service is available""" |
3105 | 69 | try: | 72 | try: |
3107 | 70 | subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT) | 73 | subprocess.check_output( |
3108 | 74 | ['service', service_name, 'status'], | ||
3109 | 75 | stderr=subprocess.STDOUT).decode('UTF-8') | ||
3110 | 71 | except subprocess.CalledProcessError as e: | 76 | except subprocess.CalledProcessError as e: |
3111 | 72 | return 'unrecognized service' not in e.output | 77 | return 'unrecognized service' not in e.output |
3112 | 73 | else: | 78 | else: |
3113 | @@ -96,6 +101,26 @@ | |||
3114 | 96 | return user_info | 101 | return user_info |
3115 | 97 | 102 | ||
3116 | 98 | 103 | ||
3117 | 104 | def add_group(group_name, system_group=False): | ||
3118 | 105 | """Add a group to the system""" | ||
3119 | 106 | try: | ||
3120 | 107 | group_info = grp.getgrnam(group_name) | ||
3121 | 108 | log('group {0} already exists!'.format(group_name)) | ||
3122 | 109 | except KeyError: | ||
3123 | 110 | log('creating group {0}'.format(group_name)) | ||
3124 | 111 | cmd = ['addgroup'] | ||
3125 | 112 | if system_group: | ||
3126 | 113 | cmd.append('--system') | ||
3127 | 114 | else: | ||
3128 | 115 | cmd.extend([ | ||
3129 | 116 | '--group', | ||
3130 | 117 | ]) | ||
3131 | 118 | cmd.append(group_name) | ||
3132 | 119 | subprocess.check_call(cmd) | ||
3133 | 120 | group_info = grp.getgrnam(group_name) | ||
3134 | 121 | return group_info | ||
3135 | 122 | |||
3136 | 123 | |||
3137 | 99 | def add_user_to_group(username, group): | 124 | def add_user_to_group(username, group): |
3138 | 100 | """Add a user to a group""" | 125 | """Add a user to a group""" |
3139 | 101 | cmd = [ | 126 | cmd = [ |
3140 | @@ -115,7 +140,7 @@ | |||
3141 | 115 | cmd.append(from_path) | 140 | cmd.append(from_path) |
3142 | 116 | cmd.append(to_path) | 141 | cmd.append(to_path) |
3143 | 117 | log(" ".join(cmd)) | 142 | log(" ".join(cmd)) |
3145 | 118 | return subprocess.check_output(cmd).strip() | 143 | return subprocess.check_output(cmd).decode('UTF-8').strip() |
3146 | 119 | 144 | ||
3147 | 120 | 145 | ||
3148 | 121 | def symlink(source, destination): | 146 | def symlink(source, destination): |
3149 | @@ -130,7 +155,7 @@ | |||
3150 | 130 | subprocess.check_call(cmd) | 155 | subprocess.check_call(cmd) |
3151 | 131 | 156 | ||
3152 | 132 | 157 | ||
3154 | 133 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | 158 | def mkdir(path, owner='root', group='root', perms=0o555, force=False): |
3155 | 134 | """Create a directory""" | 159 | """Create a directory""" |
3156 | 135 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | 160 | log("Making dir {} {}:{} {:o}".format(path, owner, group, |
3157 | 136 | perms)) | 161 | perms)) |
3158 | @@ -146,7 +171,7 @@ | |||
3159 | 146 | os.chown(realpath, uid, gid) | 171 | os.chown(realpath, uid, gid) |
3160 | 147 | 172 | ||
3161 | 148 | 173 | ||
3163 | 149 | def write_file(path, content, owner='root', group='root', perms=0444): | 174 | def write_file(path, content, owner='root', group='root', perms=0o444): |
3164 | 150 | """Create or overwrite a file with the contents of a string""" | 175 | """Create or overwrite a file with the contents of a string""" |
3165 | 151 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) | 176 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) |
3166 | 152 | uid = pwd.getpwnam(owner).pw_uid | 177 | uid = pwd.getpwnam(owner).pw_uid |
3167 | @@ -177,7 +202,7 @@ | |||
3168 | 177 | cmd_args.extend([device, mountpoint]) | 202 | cmd_args.extend([device, mountpoint]) |
3169 | 178 | try: | 203 | try: |
3170 | 179 | subprocess.check_output(cmd_args) | 204 | subprocess.check_output(cmd_args) |
3172 | 180 | except subprocess.CalledProcessError, e: | 205 | except subprocess.CalledProcessError as e: |
3173 | 181 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | 206 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) |
3174 | 182 | return False | 207 | return False |
3175 | 183 | 208 | ||
3176 | @@ -191,7 +216,7 @@ | |||
3177 | 191 | cmd_args = ['umount', mountpoint] | 216 | cmd_args = ['umount', mountpoint] |
3178 | 192 | try: | 217 | try: |
3179 | 193 | subprocess.check_output(cmd_args) | 218 | subprocess.check_output(cmd_args) |
3181 | 194 | except subprocess.CalledProcessError, e: | 219 | except subprocess.CalledProcessError as e: |
3182 | 195 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | 220 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) |
3183 | 196 | return False | 221 | return False |
3184 | 197 | 222 | ||
3185 | @@ -218,8 +243,8 @@ | |||
3186 | 218 | """ | 243 | """ |
3187 | 219 | if os.path.exists(path): | 244 | if os.path.exists(path): |
3188 | 220 | h = getattr(hashlib, hash_type)() | 245 | h = getattr(hashlib, hash_type)() |
3191 | 221 | with open(path, 'r') as source: | 246 | with open(path, 'rb') as source: |
3192 | 222 | h.update(source.read()) # IGNORE:E1101 - it does have update | 247 | h.update(source.read()) |
3193 | 223 | return h.hexdigest() | 248 | return h.hexdigest() |
3194 | 224 | else: | 249 | else: |
3195 | 225 | return None | 250 | return None |
3196 | @@ -297,7 +322,7 @@ | |||
3197 | 297 | if length is None: | 322 | if length is None: |
3198 | 298 | length = random.choice(range(35, 45)) | 323 | length = random.choice(range(35, 45)) |
3199 | 299 | alphanumeric_chars = [ | 324 | alphanumeric_chars = [ |
3201 | 300 | l for l in (string.letters + string.digits) | 325 | l for l in (string.ascii_letters + string.digits) |
3202 | 301 | if l not in 'l0QD1vAEIOUaeiou'] | 326 | if l not in 'l0QD1vAEIOUaeiou'] |
3203 | 302 | random_chars = [ | 327 | random_chars = [ |
3204 | 303 | random.choice(alphanumeric_chars) for _ in range(length)] | 328 | random.choice(alphanumeric_chars) for _ in range(length)] |
3205 | @@ -306,30 +331,65 @@ | |||
3206 | 306 | 331 | ||
3207 | 307 | def list_nics(nic_type): | 332 | def list_nics(nic_type): |
3208 | 308 | '''Return a list of nics of given type(s)''' | 333 | '''Return a list of nics of given type(s)''' |
3210 | 309 | if isinstance(nic_type, basestring): | 334 | if isinstance(nic_type, six.string_types): |
3211 | 310 | int_types = [nic_type] | 335 | int_types = [nic_type] |
3212 | 311 | else: | 336 | else: |
3213 | 312 | int_types = nic_type | 337 | int_types = nic_type |
3214 | 313 | interfaces = [] | 338 | interfaces = [] |
3215 | 314 | for int_type in int_types: | 339 | for int_type in int_types: |
3216 | 315 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] | 340 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] |
3218 | 316 | ip_output = subprocess.check_output(cmd).split('\n') | 341 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
3219 | 317 | ip_output = (line for line in ip_output if line) | 342 | ip_output = (line for line in ip_output if line) |
3220 | 318 | for line in ip_output: | 343 | for line in ip_output: |
3221 | 319 | if line.split()[1].startswith(int_type): | 344 | if line.split()[1].startswith(int_type): |
3223 | 320 | interfaces.append(line.split()[1].replace(":", "")) | 345 | matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line) |
3224 | 346 | if matched: | ||
3225 | 347 | interface = matched.groups()[0] | ||
3226 | 348 | else: | ||
3227 | 349 | interface = line.split()[1].replace(":", "") | ||
3228 | 350 | interfaces.append(interface) | ||
3229 | 351 | |||
3230 | 321 | return interfaces | 352 | return interfaces |
3231 | 322 | 353 | ||
3232 | 323 | 354 | ||
3234 | 324 | def set_nic_mtu(nic, mtu): | 355 | def set_nic_mtu(nic, mtu, persistence=False): |
3235 | 325 | '''Set MTU on a network interface''' | 356 | '''Set MTU on a network interface''' |
3236 | 326 | cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] | 357 | cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] |
3237 | 327 | subprocess.check_call(cmd) | 358 | subprocess.check_call(cmd) |
3238 | 359 | # persistence mtu configuration | ||
3239 | 360 | if not persistence: | ||
3240 | 361 | return | ||
3241 | 362 | if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic): | ||
3242 | 363 | nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic | ||
3243 | 364 | else: | ||
3244 | 365 | nic_cfg_file = "/etc/network/interfaces" | ||
3245 | 366 | f = open(nic_cfg_file,"r") | ||
3246 | 367 | lines = f.readlines() | ||
3247 | 368 | found = False | ||
3248 | 369 | length = len(lines) | ||
3249 | 370 | for i in range(len(lines)): | ||
3250 | 371 | lines[i] = lines[i].replace('\n', '') | ||
3251 | 372 | if lines[i].startswith("iface %s" % nic): | ||
3252 | 373 | found = True | ||
3253 | 374 | lines.insert(i+1, " up ip link set $IFACE mtu %s" % mtu) | ||
3254 | 375 | lines.insert(i+2, " down ip link set $IFACE mtu 1500") | ||
3255 | 376 | if length>i+2 and lines[i+3].startswith(" up ip link set $IFACE mtu"): | ||
3256 | 377 | del lines[i+3] | ||
3257 | 378 | if length>i+2 and lines[i+3].startswith(" down ip link set $IFACE mtu"): | ||
3258 | 379 | del lines[i+3] | ||
3259 | 380 | break | ||
3260 | 381 | if not found: | ||
3261 | 382 | lines.insert(length+1, "") | ||
3262 | 383 | lines.insert(length+2, "auto %s" % nic) | ||
3263 | 384 | lines.insert(length+3, "iface %s inet dhcp" % nic) | ||
3264 | 385 | lines.insert(length+4, " up ip link set $IFACE mtu %s" % mtu) | ||
3265 | 386 | lines.insert(length+5, " down ip link set $IFACE mtu 1500") | ||
3266 | 387 | write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644) | ||
3267 | 328 | 388 | ||
3268 | 329 | 389 | ||
3269 | 330 | def get_nic_mtu(nic): | 390 | def get_nic_mtu(nic): |
3270 | 331 | cmd = ['ip', 'addr', 'show', nic] | 391 | cmd = ['ip', 'addr', 'show', nic] |
3272 | 332 | ip_output = subprocess.check_output(cmd).split('\n') | 392 | ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n') |
3273 | 333 | mtu = "" | 393 | mtu = "" |
3274 | 334 | for line in ip_output: | 394 | for line in ip_output: |
3275 | 335 | words = line.split() | 395 | words = line.split() |
3276 | @@ -340,7 +400,7 @@ | |||
3277 | 340 | 400 | ||
3278 | 341 | def get_nic_hwaddr(nic): | 401 | def get_nic_hwaddr(nic): |
3279 | 342 | cmd = ['ip', '-o', '-0', 'addr', 'show', nic] | 402 | cmd = ['ip', '-o', '-0', 'addr', 'show', nic] |
3281 | 343 | ip_output = subprocess.check_output(cmd) | 403 | ip_output = subprocess.check_output(cmd).decode('UTF-8') |
3282 | 344 | hwaddr = "" | 404 | hwaddr = "" |
3283 | 345 | words = ip_output.split() | 405 | words = ip_output.split() |
3284 | 346 | if 'link/ether' in words: | 406 | if 'link/ether' in words: |
3285 | @@ -357,8 +417,8 @@ | |||
3286 | 357 | 417 | ||
3287 | 358 | ''' | 418 | ''' |
3288 | 359 | import apt_pkg | 419 | import apt_pkg |
3289 | 360 | from charmhelpers.fetch import apt_cache | ||
3290 | 361 | if not pkgcache: | 420 | if not pkgcache: |
3291 | 421 | from charmhelpers.fetch import apt_cache | ||
3292 | 362 | pkgcache = apt_cache() | 422 | pkgcache = apt_cache() |
3293 | 363 | pkg = pkgcache[package] | 423 | pkg = pkgcache[package] |
3294 | 364 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) | 424 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
3295 | 365 | 425 | ||
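Besides the 2→3 fixes, the `list_nics` hunk in `host.py` adds a special case for VLAN interfaces on bonds, whose `ip addr` labels look like `bond0.100@bond0`; the regex keeps only the `bond0.100` part. A sketch of that parsing step on sample lines shaped like `ip addr show` output (the sample lines are assumptions, not captured output; `parse_interface` is my name):

```python
import re


def parse_interface(line):
    """Extract the interface name from one 'ip addr show' line.

    VLAN devices on bonds are labelled 'bondX.Y@bondX'; keep only the
    'bondX.Y' part. Other interfaces just need the trailing ':' stripped.
    """
    matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
    if matched:
        return matched.groups()[0]
    return line.split()[1].replace(':', '')


print(parse_interface('5: bond0.100@bond0: <BROADCAST,MULTICAST,UP> mtu 1500'))
print(parse_interface('2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500'))
```

Without the regex, the old code would have reported the whole `bond0.100@bond0:` token (minus the colon), which is not a usable device name.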
3296 | === modified file 'hooks/charmhelpers/core/services/__init__.py' | |||
3297 | --- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:14 +0000 | |||
3298 | +++ hooks/charmhelpers/core/services/__init__.py 2014-12-10 07:57:54 +0000 | |||
3299 | @@ -1,2 +1,2 @@ | |||
3302 | 1 | from .base import * | 1 | from .base import * # NOQA |
3303 | 2 | from .helpers import * | 2 | from .helpers import * # NOQA |
3304 | 3 | 3 | ||
3305 | === modified file 'hooks/charmhelpers/core/services/helpers.py' | |||
3306 | --- hooks/charmhelpers/core/services/helpers.py 2014-10-06 21:57:43 +0000 | |||
3307 | +++ hooks/charmhelpers/core/services/helpers.py 2014-12-10 07:57:54 +0000 | |||
3308 | @@ -196,7 +196,7 @@ | |||
3309 | 196 | if not os.path.isabs(file_name): | 196 | if not os.path.isabs(file_name): |
3310 | 197 | file_name = os.path.join(hookenv.charm_dir(), file_name) | 197 | file_name = os.path.join(hookenv.charm_dir(), file_name) |
3311 | 198 | with open(file_name, 'w') as file_stream: | 198 | with open(file_name, 'w') as file_stream: |
3313 | 199 | os.fchmod(file_stream.fileno(), 0600) | 199 | os.fchmod(file_stream.fileno(), 0o600) |
3314 | 200 | yaml.dump(config_data, file_stream) | 200 | yaml.dump(config_data, file_stream) |
3315 | 201 | 201 | ||
3316 | 202 | def read_context(self, file_name): | 202 | def read_context(self, file_name): |
3317 | @@ -211,15 +211,19 @@ | |||
3318 | 211 | 211 | ||
3319 | 212 | class TemplateCallback(ManagerCallback): | 212 | class TemplateCallback(ManagerCallback): |
3320 | 213 | """ | 213 | """ |
3324 | 214 | Callback class that will render a Jinja2 template, for use as a ready action. | 214 | Callback class that will render a Jinja2 template, for use as a ready |
3325 | 215 | 215 | action. | |
3326 | 216 | :param str source: The template source file, relative to `$CHARM_DIR/templates` | 216 | |
3327 | 217 | :param str source: The template source file, relative to | ||
3328 | 218 | `$CHARM_DIR/templates` | ||
3329 | 219 | |||
3330 | 217 | :param str target: The target to write the rendered template to | 220 | :param str target: The target to write the rendered template to |
3331 | 218 | :param str owner: The owner of the rendered file | 221 | :param str owner: The owner of the rendered file |
3332 | 219 | :param str group: The group of the rendered file | 222 | :param str group: The group of the rendered file |
3333 | 220 | :param int perms: The permissions of the rendered file | 223 | :param int perms: The permissions of the rendered file |
3334 | 221 | """ | 224 | """ |
3336 | 222 | def __init__(self, source, target, owner='root', group='root', perms=0444): | 225 | def __init__(self, source, target, |
3337 | 226 | owner='root', group='root', perms=0o444): | ||
3338 | 223 | self.source = source | 227 | self.source = source |
3339 | 224 | self.target = target | 228 | self.target = target |
3340 | 225 | self.owner = owner | 229 | self.owner = owner |
3341 | 226 | 230 | ||
3342 | === added file 'hooks/charmhelpers/core/sysctl.py' | |||
3343 | --- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000 | |||
3344 | +++ hooks/charmhelpers/core/sysctl.py 2014-12-10 07:57:54 +0000 | |||
3345 | @@ -0,0 +1,34 @@ | |||
3346 | 1 | #!/usr/bin/env python | ||
3347 | 2 | # -*- coding: utf-8 -*- | ||
3348 | 3 | |||
3349 | 4 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' | ||
3350 | 5 | |||
3351 | 6 | import yaml | ||
3352 | 7 | |||
3353 | 8 | from subprocess import check_call | ||
3354 | 9 | |||
3355 | 10 | from charmhelpers.core.hookenv import ( | ||
3356 | 11 | log, | ||
3357 | 12 | DEBUG, | ||
3358 | 13 | ) | ||
3359 | 14 | |||
3360 | 15 | |||
3361 | 16 | def create(sysctl_dict, sysctl_file): | ||
3362 | 17 | """Creates a sysctl.conf file from a YAML associative array | ||
3363 | 18 | |||
3364 | 19 | :param sysctl_dict: a dict of sysctl options eg { 'kernel.max_pid': 1337 } | ||
3365 | 20 | :type sysctl_dict: dict | ||
3366 | 21 | :param sysctl_file: path to the sysctl file to be saved | ||
3367 | 22 | :type sysctl_file: str or unicode | ||
3368 | 23 | :returns: None | ||
3369 | 24 | """ | ||
3370 | 25 | sysctl_dict = yaml.load(sysctl_dict) | ||
3371 | 26 | |||
3372 | 27 | with open(sysctl_file, "w") as fd: | ||
3373 | 28 | for key, value in sysctl_dict.items(): | ||
3374 | 29 | fd.write("{}={}\n".format(key, value)) | ||
3375 | 30 | |||
3376 | 31 | log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict), | ||
3377 | 32 | level=DEBUG) | ||
3378 | 33 | |||
3379 | 34 | check_call(["sysctl", "-p", sysctl_file]) | ||
3380 | 0 | 35 | ||
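The new `sysctl.py` helper renders the options as `key=value` lines and then applies them with `sysctl -p` (note that `create()` runs `yaml.load` on its first argument, so in practice it expects a YAML string rather than the dict its docstring describes). A sketch of just the rendering step, leaving out the file write and the root-only `sysctl -p` call; `format_sysctl` is my name, and the `sorted()` is mine for deterministic output:

```python
def format_sysctl(sysctl_dict):
    """Render a sysctl.conf body from a mapping of option -> value."""
    return ''.join('{}={}\n'.format(k, v)
                   for k, v in sorted(sysctl_dict.items()))


print(format_sysctl({'kernel.pid_max': 4194303, 'vm.swappiness': 10}))
```
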
3381 | === modified file 'hooks/charmhelpers/core/templating.py' | |||
3382 | --- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:14 +0000 | |||
3383 | +++ hooks/charmhelpers/core/templating.py 2014-12-10 07:57:54 +0000 | |||
3384 | @@ -4,7 +4,8 @@ | |||
3385 | 4 | from charmhelpers.core import hookenv | 4 | from charmhelpers.core import hookenv |
3386 | 5 | 5 | ||
3387 | 6 | 6 | ||
3389 | 7 | def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): | 7 | def render(source, target, context, owner='root', group='root', |
3390 | 8 | perms=0o444, templates_dir=None): | ||
3391 | 8 | """ | 9 | """ |
3392 | 9 | Render a template. | 10 | Render a template. |
3393 | 10 | 11 | ||
3394 | 11 | 12 | ||
3395 | === modified file 'hooks/charmhelpers/fetch/__init__.py' | |||
3396 | --- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:57:43 +0000 | |||
3397 | +++ hooks/charmhelpers/fetch/__init__.py 2014-12-10 07:57:54 +0000 | |||
3398 | @@ -5,10 +5,6 @@ | |||
3399 | 5 | from charmhelpers.core.host import ( | 5 | from charmhelpers.core.host import ( |
3400 | 6 | lsb_release | 6 | lsb_release |
3401 | 7 | ) | 7 | ) |
3402 | 8 | from urlparse import ( | ||
3403 | 9 | urlparse, | ||
3404 | 10 | urlunparse, | ||
3405 | 11 | ) | ||
3406 | 12 | import subprocess | 8 | import subprocess |
3407 | 13 | from charmhelpers.core.hookenv import ( | 9 | from charmhelpers.core.hookenv import ( |
3408 | 14 | config, | 10 | config, |
3409 | @@ -16,6 +12,12 @@ | |||
3410 | 16 | ) | 12 | ) |
3411 | 17 | import os | 13 | import os |
3412 | 18 | 14 | ||
3413 | 15 | import six | ||
3414 | 16 | if six.PY3: | ||
3415 | 17 | from urllib.parse import urlparse, urlunparse | ||
3416 | 18 | else: | ||
3417 | 19 | from urlparse import urlparse, urlunparse | ||
3418 | 20 | |||
3419 | 19 | 21 | ||
3420 | 20 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | 22 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive |
3421 | 21 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | 23 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main |
3422 | @@ -72,6 +74,7 @@ | |||
3423 | 72 | FETCH_HANDLERS = ( | 74 | FETCH_HANDLERS = ( |
3424 | 73 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | 75 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', |
3425 | 74 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | 76 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', |
3426 | 77 | 'charmhelpers.fetch.giturl.GitUrlFetchHandler', | ||
3427 | 75 | ) | 78 | ) |
3428 | 76 | 79 | ||
3429 | 77 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. | 80 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. |
3430 | @@ -148,7 +151,7 @@ | |||
3431 | 148 | cmd = ['apt-get', '--assume-yes'] | 151 | cmd = ['apt-get', '--assume-yes'] |
3432 | 149 | cmd.extend(options) | 152 | cmd.extend(options) |
3433 | 150 | cmd.append('install') | 153 | cmd.append('install') |
3435 | 151 | if isinstance(packages, basestring): | 154 | if isinstance(packages, six.string_types): |
3436 | 152 | cmd.append(packages) | 155 | cmd.append(packages) |
3437 | 153 | else: | 156 | else: |
3438 | 154 | cmd.extend(packages) | 157 | cmd.extend(packages) |
3439 | @@ -181,7 +184,7 @@ | |||
3440 | 181 | def apt_purge(packages, fatal=False): | 184 | def apt_purge(packages, fatal=False): |
3441 | 182 | """Purge one or more packages""" | 185 | """Purge one or more packages""" |
3442 | 183 | cmd = ['apt-get', '--assume-yes', 'purge'] | 186 | cmd = ['apt-get', '--assume-yes', 'purge'] |
3444 | 184 | if isinstance(packages, basestring): | 187 | if isinstance(packages, six.string_types): |
3445 | 185 | cmd.append(packages) | 188 | cmd.append(packages) |
3446 | 186 | else: | 189 | else: |
3447 | 187 | cmd.extend(packages) | 190 | cmd.extend(packages) |
3448 | @@ -192,7 +195,7 @@ | |||
3449 | 192 | def apt_hold(packages, fatal=False): | 195 | def apt_hold(packages, fatal=False): |
3450 | 193 | """Hold one or more packages""" | 196 | """Hold one or more packages""" |
3451 | 194 | cmd = ['apt-mark', 'hold'] | 197 | cmd = ['apt-mark', 'hold'] |
3453 | 195 | if isinstance(packages, basestring): | 198 | if isinstance(packages, six.string_types): |
3454 | 196 | cmd.append(packages) | 199 | cmd.append(packages) |
3455 | 197 | else: | 200 | else: |
3456 | 198 | cmd.extend(packages) | 201 | cmd.extend(packages) |
3457 | @@ -218,6 +221,7 @@ | |||
3458 | 218 | pocket for the release. | 221 | pocket for the release. |
3459 | 219 | 'cloud:' may be used to activate official cloud archive pockets, | 222 | 'cloud:' may be used to activate official cloud archive pockets, |
3460 | 220 | such as 'cloud:icehouse' | 223 | such as 'cloud:icehouse' |
3461 | 224 | 'distro' may be used as a noop | ||
3462 | 221 | 225 | ||
3463 | 222 | @param key: A key to be added to the system's APT keyring and used | 226 | @param key: A key to be added to the system's APT keyring and used |
3464 | 223 | to verify the signatures on packages. Ideally, this should be an | 227 | to verify the signatures on packages. Ideally, this should be an |
3465 | @@ -251,12 +255,14 @@ | |||
3466 | 251 | release = lsb_release()['DISTRIB_CODENAME'] | 255 | release = lsb_release()['DISTRIB_CODENAME'] |
3467 | 252 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | 256 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
3468 | 253 | apt.write(PROPOSED_POCKET.format(release)) | 257 | apt.write(PROPOSED_POCKET.format(release)) |
3469 | 258 | elif source == 'distro': | ||
3470 | 259 | pass | ||
3471 | 254 | else: | 260 | else: |
3473 | 255 | raise SourceConfigError("Unknown source: {!r}".format(source)) | 261 | log("Unknown source: {!r}".format(source)) |
3474 | 256 | 262 | ||
3475 | 257 | if key: | 263 | if key: |
3476 | 258 | if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: | 264 | if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: |
3478 | 259 | with NamedTemporaryFile() as key_file: | 265 | with NamedTemporaryFile('w+') as key_file: |
3479 | 260 | key_file.write(key) | 266 | key_file.write(key) |
3480 | 261 | key_file.flush() | 267 | key_file.flush() |
3481 | 262 | key_file.seek(0) | 268 | key_file.seek(0) |
3482 | @@ -293,14 +299,14 @@ | |||
3483 | 293 | sources = safe_load((config(sources_var) or '').strip()) or [] | 299 | sources = safe_load((config(sources_var) or '').strip()) or [] |
3484 | 294 | keys = safe_load((config(keys_var) or '').strip()) or None | 300 | keys = safe_load((config(keys_var) or '').strip()) or None |
3485 | 295 | 301 | ||
3487 | 296 | if isinstance(sources, basestring): | 302 | if isinstance(sources, six.string_types): |
3488 | 297 | sources = [sources] | 303 | sources = [sources] |
3489 | 298 | 304 | ||
3490 | 299 | if keys is None: | 305 | if keys is None: |
3491 | 300 | for source in sources: | 306 | for source in sources: |
3492 | 301 | add_source(source, None) | 307 | add_source(source, None) |
3493 | 302 | else: | 308 | else: |
3495 | 303 | if isinstance(keys, basestring): | 309 | if isinstance(keys, six.string_types): |
3496 | 304 | keys = [keys] | 310 | keys = [keys] |
3497 | 305 | 311 | ||
3498 | 306 | if len(sources) != len(keys): | 312 | if len(sources) != len(keys): |
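The `basestring` → `six.string_types` swap above is needed because `basestring` no longer exists on Python 3. A dependency-free sketch of the same scalar-or-list normalization the hunk applies to `sources` and `keys` (the `listify` name is mine, not from the charm):

```python
try:
    string_types = basestring   # Python 2: covers str and unicode
except NameError:
    string_types = str          # Python 3: basestring is gone

def listify(value):
    # Same normalization the hunk performs: wrap a lone string in a
    # list so later code can iterate over sources/keys uniformly.
    if isinstance(value, string_types):
        return [value]
    return value
```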
3499 | @@ -397,7 +403,7 @@ | |||
3500 | 397 | while result is None or result == APT_NO_LOCK: | 403 | while result is None or result == APT_NO_LOCK: |
3501 | 398 | try: | 404 | try: |
3502 | 399 | result = subprocess.check_call(cmd, env=env) | 405 | result = subprocess.check_call(cmd, env=env) |
3504 | 400 | except subprocess.CalledProcessError, e: | 406 | except subprocess.CalledProcessError as e: |
3505 | 401 | retry_count = retry_count + 1 | 407 | retry_count = retry_count + 1 |
3506 | 402 | if retry_count > APT_NO_LOCK_RETRY_COUNT: | 408 | if retry_count > APT_NO_LOCK_RETRY_COUNT: |
3507 | 403 | raise | 409 | raise |
3508 | 404 | 410 | ||
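The `except subprocess.CalledProcessError as e` change is the key Python 3 fix here: the old `except E, e` comma form is a SyntaxError on Python 3, while `as` works on 2.6+ and 3. A self-contained sketch of the surrounding lock-retry loop (constant values are illustrative; charmhelpers treats apt exit code 100 as "couldn't get the dpkg lock"):

```python
import subprocess
import time

APT_NO_LOCK = 100             # apt's "could not get lock" exit status
APT_NO_LOCK_RETRY_COUNT = 3   # illustrative retry budget
APT_NO_LOCK_RETRY_DELAY = 1   # seconds between attempts (illustrative)

def run_with_lock_retry(cmd):
    """Retry a command while it fails with the apt lock-contention code."""
    result = None
    retry_count = 0
    while result is None or result == APT_NO_LOCK:
        try:
            result = subprocess.check_call(cmd)
        except subprocess.CalledProcessError as e:  # 2/3-compatible syntax
            retry_count += 1
            if retry_count > APT_NO_LOCK_RETRY_COUNT:
                raise
            result = e.returncode
            time.sleep(APT_NO_LOCK_RETRY_DELAY)
    return result
```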
3509 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
3510 | --- hooks/charmhelpers/fetch/archiveurl.py 2014-10-06 21:57:43 +0000 | |||
3511 | +++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-10 07:57:54 +0000 | |||
3512 | @@ -1,8 +1,23 @@ | |||
3513 | 1 | import os | 1 | import os |
3514 | 2 | import urllib2 | ||
3515 | 3 | from urllib import urlretrieve | ||
3516 | 4 | import urlparse | ||
3517 | 5 | import hashlib | 2 | import hashlib |
3518 | 3 | import re | ||
3519 | 4 | |||
3520 | 5 | import six | ||
3521 | 6 | if six.PY3: | ||
3522 | 7 | from urllib.request import ( | ||
3523 | 8 | build_opener, install_opener, urlopen, urlretrieve, | ||
3524 | 9 | HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, | ||
3525 | 10 | ) | ||
3526 | 11 | from urllib.parse import urlparse, urlunparse, parse_qs | ||
3527 | 12 | from urllib.error import URLError | ||
3528 | 13 | else: | ||
3529 | 14 | from urllib import urlretrieve | ||
3530 | 15 | from urllib2 import ( | ||
3531 | 16 | build_opener, install_opener, urlopen, | ||
3532 | 17 | HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, | ||
3533 | 18 | URLError | ||
3534 | 19 | ) | ||
3535 | 20 | from urlparse import urlparse, urlunparse, parse_qs | ||
3536 | 6 | 21 | ||
3537 | 7 | from charmhelpers.fetch import ( | 22 | from charmhelpers.fetch import ( |
3538 | 8 | BaseFetchHandler, | 23 | BaseFetchHandler, |
3539 | @@ -15,6 +30,24 @@ | |||
3540 | 15 | from charmhelpers.core.host import mkdir, check_hash | 30 | from charmhelpers.core.host import mkdir, check_hash |
3541 | 16 | 31 | ||
3542 | 17 | 32 | ||
3543 | 33 | def splituser(host): | ||
3544 | 34 | '''urllib.splituser(), but six's support of this seems broken''' | ||
3545 | 35 | _userprog = re.compile('^(.*)@(.*)$') | ||
3546 | 36 | match = _userprog.match(host) | ||
3547 | 37 | if match: | ||
3548 | 38 | return match.group(1, 2) | ||
3549 | 39 | return None, host | ||
3550 | 40 | |||
3551 | 41 | |||
3552 | 42 | def splitpasswd(user): | ||
3553 | 43 | '''urllib.splitpasswd(), but six's support of this is missing''' | ||
3554 | 44 | _passwdprog = re.compile('^([^:]*):(.*)$', re.S) | ||
3555 | 45 | match = _passwdprog.match(user) | ||
3556 | 46 | if match: | ||
3557 | 47 | return match.group(1, 2) | ||
3558 | 48 | return user, None | ||
3559 | 49 | |||
3560 | 50 | |||
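The two helpers added above re-implement urllib's private `splituser()`/`splitpasswd()` because, per the docstrings, six's moved versions are broken or missing. Restated as runnable code with example values (the credentials are made up):

```python
import re

def splituser(host):
    """Split 'user:pass@host' into (auth, barehost); auth is None if absent."""
    match = re.match('^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host

def splitpasswd(user):
    """Split 'user:pass' into (user, passwd); passwd is None if absent."""
    match = re.match('^([^:]*):(.*)$', user, re.S)
    if match:
        return match.group(1, 2)
    return user, None
```

Note the greedy `(.*)@` in `splituser`: with multiple `@` signs, everything up to the last one is treated as the auth portion, matching urllib's behaviour.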
3561 | 18 | class ArchiveUrlFetchHandler(BaseFetchHandler): | 51 | class ArchiveUrlFetchHandler(BaseFetchHandler): |
3562 | 19 | """ | 52 | """ |
3563 | 20 | Handler to download archive files from arbitrary URLs. | 53 | Handler to download archive files from arbitrary URLs. |
3564 | @@ -42,20 +75,20 @@ | |||
3565 | 42 | """ | 75 | """ |
3566 | 43 | # propogate all exceptions | 76 | # propogate all exceptions |
3567 | 44 | # URLError, OSError, etc | 77 | # URLError, OSError, etc |
3569 | 45 | proto, netloc, path, params, query, fragment = urlparse.urlparse(source) | 78 | proto, netloc, path, params, query, fragment = urlparse(source) |
3570 | 46 | if proto in ('http', 'https'): | 79 | if proto in ('http', 'https'): |
3572 | 47 | auth, barehost = urllib2.splituser(netloc) | 80 | auth, barehost = splituser(netloc) |
3573 | 48 | if auth is not None: | 81 | if auth is not None: |
3577 | 49 | source = urlparse.urlunparse((proto, barehost, path, params, query, fragment)) | 82 | source = urlunparse((proto, barehost, path, params, query, fragment)) |
3578 | 50 | username, password = urllib2.splitpasswd(auth) | 83 | username, password = splitpasswd(auth) |
3579 | 51 | passman = urllib2.HTTPPasswordMgrWithDefaultRealm() | 84 | passman = HTTPPasswordMgrWithDefaultRealm() |
3580 | 52 | # Realm is set to None in add_password to force the username and password | 85 | # Realm is set to None in add_password to force the username and password |
3581 | 53 | # to be used whatever the realm | 86 | # to be used whatever the realm |
3582 | 54 | passman.add_password(None, source, username, password) | 87 | passman.add_password(None, source, username, password) |
3587 | 55 | authhandler = urllib2.HTTPBasicAuthHandler(passman) | 88 | authhandler = HTTPBasicAuthHandler(passman) |
3588 | 56 | opener = urllib2.build_opener(authhandler) | 89 | opener = build_opener(authhandler) |
3589 | 57 | urllib2.install_opener(opener) | 90 | install_opener(opener) |
3590 | 58 | response = urllib2.urlopen(source) | 91 | response = urlopen(source) |
3591 | 59 | try: | 92 | try: |
3592 | 60 | with open(dest, 'w') as dest_file: | 93 | with open(dest, 'w') as dest_file: |
3593 | 61 | dest_file.write(response.read()) | 94 | dest_file.write(response.read()) |
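The `download()` hunk strips `user:pass` out of the URL and feeds it to an `HTTPBasicAuthHandler` with `realm=None`, so the credentials are sent whatever realm the server advertises. A Python 3-only restatement using `urlparse`'s own `username`/`password` accessors instead of the regex helpers (the function name is mine, and no network request is made here):

```python
from urllib.request import (
    HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
    build_opener,
)
from urllib.parse import urlparse, urlunparse

def strip_auth_and_build_opener(source):
    """Return (bare_url, opener) for an http(s) URL; opener is None
    when the URL carries no credentials."""
    parts = urlparse(source)
    bare, opener = source, None
    if parts.scheme in ('http', 'https') and parts.username:
        host = parts.hostname
        if parts.port:
            host = '%s:%s' % (host, parts.port)
        # Rebuild the URL without the user:pass@ prefix.
        bare = urlunparse((parts.scheme, host, parts.path,
                           parts.params, parts.query, parts.fragment))
        passman = HTTPPasswordMgrWithDefaultRealm()
        # realm=None forces these credentials regardless of realm.
        passman.add_password(None, bare, parts.username, parts.password)
        opener = build_opener(HTTPBasicAuthHandler(passman))
    return bare, opener
```

The real `download()` goes one step further and calls `install_opener()`, making the authenticated opener process-global for the subsequent `urlopen()`.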
3594 | @@ -91,17 +124,21 @@ | |||
3595 | 91 | url_parts = self.parse_url(source) | 124 | url_parts = self.parse_url(source) |
3596 | 92 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | 125 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') |
3597 | 93 | if not os.path.exists(dest_dir): | 126 | if not os.path.exists(dest_dir): |
3599 | 94 | mkdir(dest_dir, perms=0755) | 127 | mkdir(dest_dir, perms=0o755) |
3600 | 95 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | 128 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) |
3601 | 96 | try: | 129 | try: |
3602 | 97 | self.download(source, dld_file) | 130 | self.download(source, dld_file) |
3604 | 98 | except urllib2.URLError as e: | 131 | except URLError as e: |
3605 | 99 | raise UnhandledSource(e.reason) | 132 | raise UnhandledSource(e.reason) |
3606 | 100 | except OSError as e: | 133 | except OSError as e: |
3607 | 101 | raise UnhandledSource(e.strerror) | 134 | raise UnhandledSource(e.strerror) |
3609 | 102 | options = urlparse.parse_qs(url_parts.fragment) | 135 | options = parse_qs(url_parts.fragment) |
3610 | 103 | for key, value in options.items(): | 136 | for key, value in options.items(): |
3612 | 104 | if key in hashlib.algorithms: | 137 | if not six.PY3: |
3613 | 138 | algorithms = hashlib.algorithms | ||
3614 | 139 | else: | ||
3615 | 140 | algorithms = hashlib.algorithms_available | ||
3616 | 141 | if key in algorithms: | ||
3617 | 105 | check_hash(dld_file, value, key) | 142 | check_hash(dld_file, value, key) |
3618 | 106 | if checksum: | 143 | if checksum: |
3619 | 107 | check_hash(dld_file, checksum, hash_type) | 144 | check_hash(dld_file, checksum, hash_type) |
3620 | 108 | 145 | ||
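The end of `install()` reads `#algo=digest` options out of the URL fragment and only honours keys naming a real hash algorithm — `hashlib.algorithms` on Python 2, `hashlib.algorithms_available` on Python 3, as the hunk branches on `six.PY3`. A Python 3 sketch of just that filtering step (the function name is mine):

```python
import hashlib
from urllib.parse import urlparse, parse_qs

def hash_checks_from_url(url):
    """Extract '#algo=digest' checks from a URL fragment, keeping only
    keys that name an algorithm hashlib can actually provide."""
    options = parse_qs(urlparse(url).fragment)
    checks = {}
    for key, values in options.items():
        if key in hashlib.algorithms_available:
            checks[key] = values[0]   # parse_qs wraps each value in a list
    return checks
```

In the charm, each surviving pair is then passed to `check_hash(dld_file, value, key)` to verify the downloaded archive.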
3621 | === modified file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
3622 | --- hooks/charmhelpers/fetch/bzrurl.py 2014-07-28 14:38:51 +0000 | |||
3623 | +++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-10 07:57:54 +0000 | |||
3624 | @@ -5,6 +5,10 @@ | |||
3625 | 5 | ) | 5 | ) |
3626 | 6 | from charmhelpers.core.host import mkdir | 6 | from charmhelpers.core.host import mkdir |
3627 | 7 | 7 | ||
3628 | 8 | import six | ||
3629 | 9 | if six.PY3: | ||
3630 | 10 | raise ImportError('bzrlib does not support Python3') | ||
3631 | 11 | |||
3632 | 8 | try: | 12 | try: |
3633 | 9 | from bzrlib.branch import Branch | 13 | from bzrlib.branch import Branch |
3634 | 10 | except ImportError: | 14 | except ImportError: |
3635 | @@ -42,7 +46,7 @@ | |||
3636 | 42 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", | 46 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
3637 | 43 | branch_name) | 47 | branch_name) |
3638 | 44 | if not os.path.exists(dest_dir): | 48 | if not os.path.exists(dest_dir): |
3640 | 45 | mkdir(dest_dir, perms=0755) | 49 | mkdir(dest_dir, perms=0o755) |
3641 | 46 | try: | 50 | try: |
3642 | 47 | self.branch(source, dest_dir) | 51 | self.branch(source, dest_dir) |
3643 | 48 | except OSError as e: | 52 | except OSError as e: |
3644 | 49 | 53 | ||
3645 | === added file 'hooks/charmhelpers/fetch/giturl.py' | |||
3646 | --- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000 | |||
3647 | +++ hooks/charmhelpers/fetch/giturl.py 2014-12-10 07:57:54 +0000 | |||
3648 | @@ -0,0 +1,51 @@ | |||
3649 | 1 | import os | ||
3650 | 2 | from charmhelpers.fetch import ( | ||
3651 | 3 | BaseFetchHandler, | ||
3652 | 4 | UnhandledSource | ||
3653 | 5 | ) | ||
3654 | 6 | from charmhelpers.core.host import mkdir | ||
3655 | 7 | |||
3656 | 8 | import six | ||
3657 | 9 | if six.PY3: | ||
3658 | 10 | raise ImportError('GitPython does not support Python 3') | ||
3659 | 11 | |||
3660 | 12 | try: | ||
3661 | 13 | from git import Repo | ||
3662 | 14 | except ImportError: | ||
3663 | 15 | from charmhelpers.fetch import apt_install | ||
3664 | 16 | apt_install("python-git") | ||
3665 | 17 | from git import Repo | ||
3666 | 18 | |||
3667 | 19 | |||
3668 | 20 | class GitUrlFetchHandler(BaseFetchHandler): | ||
3669 | 21 | """Handler for git branches via generic and github URLs""" | ||
3670 | 22 | def can_handle(self, source): | ||
3671 | 23 | url_parts = self.parse_url(source) | ||
3672 | 24 | # TODO (mattyw) no support for ssh git@ yet | ||
3673 | 25 | if url_parts.scheme not in ('http', 'https', 'git'): | ||
3674 | 26 | return False | ||
3675 | 27 | else: | ||
3676 | 28 | return True | ||
3677 | 29 | |||
3678 | 30 | def clone(self, source, dest, branch): | ||
3679 | 31 | if not self.can_handle(source): | ||
3680 | 32 | raise UnhandledSource("Cannot handle {}".format(source)) | ||
3681 | 33 | |||
3682 | 34 | repo = Repo.clone_from(source, dest) | ||
3683 | 35 | repo.git.checkout(branch) | ||
3684 | 36 | |||
3685 | 37 | def install(self, source, branch="master", dest=None): | ||
3686 | 38 | url_parts = self.parse_url(source) | ||
3687 | 39 | branch_name = url_parts.path.strip("/").split("/")[-1] | ||
3688 | 40 | if dest: | ||
3689 | 41 | dest_dir = os.path.join(dest, branch_name) | ||
3690 | 42 | else: | ||
3691 | 43 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", | ||
3692 | 44 | branch_name) | ||
3693 | 45 | if not os.path.exists(dest_dir): | ||
3694 | 46 | mkdir(dest_dir, perms=0o755) | ||
3695 | 47 | try: | ||
3696 | 48 | self.clone(source, dest_dir, branch) | ||
3697 | 49 | except OSError as e: | ||
3698 | 50 | raise UnhandledSource(e.strerror) | ||
3699 | 51 | return dest_dir | ||
3700 | 0 | 52 | ||
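In the new `GitUrlFetchHandler`, `install()` derives the checkout directory from the last segment of the URL path and places it under `$CHARM_DIR/fetched`. That path logic is pure and easy to restate (the `charm_dir` default below is illustrative; the handler reads the `CHARM_DIR` environment variable):

```python
import os
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

def git_dest_dir(source, charm_dir='/var/lib/juju/mycharm'):
    """Reproduce install()'s destination logic: the last path segment
    of the source URL names the checkout directory under fetched/."""
    url_parts = urlparse(source)
    branch_name = url_parts.path.strip('/').split('/')[-1]
    return os.path.join(charm_dir, 'fetched', branch_name)
```

Note the handler raises ImportError up front on Python 3 because GitPython (imported as `git`) is only installed via `python-git` here; `can_handle()` also rejects `ssh://`/`git@` URLs, per the TODO.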
3701 | === modified file 'hooks/nova_compute_hooks.py' | |||
3702 | --- hooks/nova_compute_hooks.py 2014-11-28 12:54:57 +0000 | |||
3703 | +++ hooks/nova_compute_hooks.py 2014-12-10 07:57:54 +0000 | |||
3704 | @@ -50,11 +50,12 @@ | |||
3705 | 50 | ceph_config_file, CEPH_SECRET, | 50 | ceph_config_file, CEPH_SECRET, |
3706 | 51 | enable_shell, disable_shell, | 51 | enable_shell, disable_shell, |
3707 | 52 | fix_path_ownership, | 52 | fix_path_ownership, |
3709 | 53 | assert_charm_supports_ipv6 | 53 | assert_charm_supports_ipv6, |
3710 | 54 | ) | 54 | ) |
3711 | 55 | 55 | ||
3712 | 56 | from charmhelpers.contrib.network.ip import ( | 56 | from charmhelpers.contrib.network.ip import ( |
3714 | 57 | get_ipv6_addr | 57 | get_ipv6_addr, |
3715 | 58 | configure_phy_nic_mtu | ||
3716 | 58 | ) | 59 | ) |
3717 | 59 | 60 | ||
3718 | 60 | from nova_compute_context import CEPH_SECRET_UUID | 61 | from nova_compute_context import CEPH_SECRET_UUID |
3719 | @@ -70,6 +71,7 @@ | |||
3720 | 70 | configure_installation_source(config('openstack-origin')) | 71 | configure_installation_source(config('openstack-origin')) |
3721 | 71 | apt_update() | 72 | apt_update() |
3722 | 72 | apt_install(determine_packages(), fatal=True) | 73 | apt_install(determine_packages(), fatal=True) |
3723 | 74 | configure_phy_nic_mtu() | ||
3724 | 73 | 75 | ||
3725 | 74 | 76 | ||
3726 | 75 | @hooks.hook('config-changed') | 77 | @hooks.hook('config-changed') |
3727 | @@ -103,6 +105,8 @@ | |||
3728 | 103 | 105 | ||
3729 | 104 | CONFIGS.write_all() | 106 | CONFIGS.write_all() |
3730 | 105 | 107 | ||
3731 | 108 | configure_phy_nic_mtu() | ||
3732 | 109 | |||
3733 | 106 | 110 | ||
3734 | 107 | @hooks.hook('amqp-relation-joined') | 111 | @hooks.hook('amqp-relation-joined') |
3735 | 108 | def amqp_joined(relation_id=None): | 112 | def amqp_joined(relation_id=None): |
3736 | 109 | 113 | ||
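The hook changes call `configure_phy_nic_mtu()` from both `install` and `config-changed`; the helper itself lives in `charmhelpers.contrib.network.ip` and is not shown in this diff. A hypothetical sketch of what such a helper might boil down to, assuming it reads a NIC name and MTU from charm config and applies them with `ip link` (the function and parameter names here are stand-ins, not the real API):

```python
import subprocess

def configure_phy_nic_mtu_sketch(nic, mtu, execute=False):
    """Hypothetical sketch: build (and optionally run) the ip-link call
    that would set the MTU on a physical NIC. Running it requires
    CAP_NET_ADMIN; persisting the MTU across reboots would be a
    separate step (e.g. /etc/network/interfaces)."""
    cmd = ['ip', 'link', 'set', nic, 'mtu', str(mtu)]
    if execute:
        subprocess.check_call(cmd)
    return cmd
```

Calling it from both hooks means the MTU is applied at deploy time and re-applied whenever the operator changes the config option.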
3737 | === modified file 'hooks/nova_compute_utils.py' | |||
3738 | --- hooks/nova_compute_utils.py 2014-12-03 23:54:27 +0000 | |||
3739 | +++ hooks/nova_compute_utils.py 2014-12-10 07:57:54 +0000 | |||
3740 | @@ -14,7 +14,7 @@ | |||
3741 | 14 | from charmhelpers.core.host import ( | 14 | from charmhelpers.core.host import ( |
3742 | 15 | mkdir, | 15 | mkdir, |
3743 | 16 | service_restart, | 16 | service_restart, |
3745 | 17 | lsb_release | 17 | lsb_release, |
3746 | 18 | ) | 18 | ) |
3747 | 19 | 19 | ||
3748 | 20 | from charmhelpers.core.hookenv import ( | 20 | from charmhelpers.core.hookenv import ( |
3749 | 21 | 21 | ||
3750 | === modified file 'unit_tests/test_nova_compute_hooks.py' | |||
3751 | --- unit_tests/test_nova_compute_hooks.py 2014-11-28 14:17:10 +0000 | |||
3752 | +++ unit_tests/test_nova_compute_hooks.py 2014-12-10 07:57:54 +0000 | |||
3753 | @@ -54,7 +54,8 @@ | |||
3754 | 54 | 'ensure_ceph_keyring', | 54 | 'ensure_ceph_keyring', |
3755 | 55 | 'execd_preinstall', | 55 | 'execd_preinstall', |
3756 | 56 | # socket | 56 | # socket |
3758 | 57 | 'gethostname' | 57 | 'gethostname', |
3759 | 58 | 'configure_phy_nic_mtu' | ||
3760 | 58 | ] | 59 | ] |
3761 | 59 | 60 | ||
3762 | 60 | 61 | ||
3763 | @@ -80,11 +81,13 @@ | |||
3764 | 80 | self.assertTrue(self.apt_update.called) | 81 | self.assertTrue(self.apt_update.called) |
3765 | 81 | self.apt_install.assert_called_with(['foo', 'bar'], fatal=True) | 82 | self.apt_install.assert_called_with(['foo', 'bar'], fatal=True) |
3766 | 82 | self.execd_preinstall.assert_called() | 83 | self.execd_preinstall.assert_called() |
3767 | 84 | self.assertTrue(self.configure_phy_nic_mtu.called) | ||
3768 | 83 | 85 | ||
3769 | 84 | def test_config_changed_with_upgrade(self): | 86 | def test_config_changed_with_upgrade(self): |
3770 | 85 | self.openstack_upgrade_available.return_value = True | 87 | self.openstack_upgrade_available.return_value = True |
3771 | 86 | hooks.config_changed() | 88 | hooks.config_changed() |
3772 | 87 | self.assertTrue(self.do_openstack_upgrade.called) | 89 | self.assertTrue(self.do_openstack_upgrade.called) |
3773 | 90 | self.assertTrue(self.configure_phy_nic_mtu.called) | ||
3774 | 88 | 91 | ||
3775 | 89 | @patch.object(hooks, 'compute_joined') | 92 | @patch.object(hooks, 'compute_joined') |
3776 | 90 | def test_config_changed_with_migration(self, compute_joined): | 93 | def test_config_changed_with_migration(self, compute_joined): |
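The test changes follow the suite's usual pattern: add the helper's name to the module-level patch list, run the hook, then assert on the mock's `.called` flag. A minimal self-contained illustration of that pattern (the `install_hook` stand-in is mine; the real tests patch attributes of the `hooks` module):

```python
try:
    from unittest import mock   # Python 3 standard library
except ImportError:
    import mock                 # Python 2: the external mock package

def install_hook(mtu_helper):
    # Stand-in for hooks.install(): the real hook calls
    # configure_phy_nic_mtu() with no arguments after package install.
    mtu_helper()

def test_mtu_helper_called():
    fake = mock.Mock()
    install_hook(fake)
    # Same assertion style as the hunk: assertTrue(self.<mock>.called)
    assert fake.called
```

Note the hunk also fixes a latent bug in the patch list: without the comma added after `'gethostname'`, Python would concatenate the two adjacent string literals into `'gethostnameconfigure_phy_nic_mtu'`.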
UOSCI bot says:
charm_lint_check #1213 nova-compute-next for zhhuabj mp242611
LINT OK: passed
LINT Results (max last 5 lines):
W: config.yaml: option disable-neutron-security-groups has no default value
I: config.yaml: option os-data-network has no default value
I: config.yaml: option config-flags has no default value
I: config.yaml: option instances-path has no default value
I: config.yaml: option migration-auth-type has no default value
Full lint test output: http://paste.ubuntu.com/9206996/
Build: http://10.98.191.181:8080/job/charm_lint_check/1213/