Merge lp:~chad.smith/charms/precise/keystone/ha-support into lp:~charmers/charms/precise/keystone/trunk
Status: Superseded
Proposed branch: lp:~chad.smith/charms/precise/keystone/ha-support
Merge into: lp:~charmers/charms/precise/keystone/trunk
Diff against target: 1009 lines (+554/-155), 11 files modified:
add_to_cluster (+2/-0), config.yaml (+37/-0), health_checks.d/service_ports_live (+13/-0), health_checks.d/service_running (+13/-0), hooks/keystone-hooks (+184/-38), hooks/lib/openstack_common.py (+72/-4), hooks/utils.py (+189/-112), metadata.yaml (+6/-0), remove_from_cluster (+2/-0), revision (+1/-1), templates/haproxy.cfg (+35/-0)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/keystone/ha-support
Related bugs:
Reviewer: Adam Gandelman (Pending)
Review via email: mp+149883@code.launchpad.net
This proposal has been superseded by a proposal from 2013-02-22.
Commit message
Description of the change
This is a merge proposal to establish a potential template for health_checks.d, add_to_cluster and remove_from_cluster delivery within the ha-supported OpenStack charms.
It is intended as a talking point and guidance on what landscape-client will expect of OpenStack charms supporting HA services. If anything here doesn't look acceptable, I'm game for a change, but I wanted to pull a template together to seed discussion; any suggestions for better charm-oriented approaches are welcome.
These script locations will be hard-coded within landscape-client and the scripts will be run during rolling OpenStack upgrades, so, prior to ODS, we'd like to make sure this process makes sense for most haclustered services.
An example of landscape-client's interaction with the charm scripts is our HAServiceManager plugin at https:/
1. Before any packages are upgraded, landscape-client will only call /var/lib/
- remove_from_cluster will place the node in crm standby state, thereby migrating any active HA resources off the current node.
2. landscape-client will upgrade the packages, installing any additional package dependencies.
3. Upon successful upgrade (possibly involving a reboot), landscape-client will run, via run-parts, all health scripts from /var/lib/
4. If all health scripts succeed, landscape-client will run /var/lib/
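The four steps above amount to a standby, upgrade, health-gate, online cycle. A minimal sketch of that orchestration (illustrative only: the `upgrade_node` helper and its `charm_dir` argument are assumptions based on the script layout proposed above; the real driver is landscape-client's HAServiceManager plugin, and the package-upgrade step is elided):

```shell
#!/bin/bash
# Hypothetical orchestration of the charm-delivered scripts.
upgrade_node() {
    local charm_dir=$1

    # 1. Migrate HA resources away (the script puts the node in crm standby).
    "$charm_dir/remove_from_cluster" || return 1

    # 2. Package upgrade (and possible reboot) would happen here.

    # 3. Run every health check, run-parts style; any failure aborts
    #    before the node is allowed back into the cluster.
    local check
    for check in "$charm_dir"/health_checks.d/*; do
        "$check" || return 1
    done

    # 4. Rejoin the cluster (the script runs `crm node online`).
    "$charm_dir/add_to_cluster"
}
```

A failing health check leaves the node in standby, so a broken upgrade never receives HA traffic.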
As James mentioned in email, the remove_from_cluster script might need to take more action to cleanly remove an API service from OpenStack schedulers or configs. If a charm's remove_from_cluster does more than just migrate HA resources away from the node, then we'll need to discuss how to decompose add_to_cluster's functionality into separate scripts: one that lets us ensure the local OpenStack API is running and can be "health checked", and a separate add_to_cluster that only finalizes and activates the configuration in the HA service.
Adam Gandelman (gandelman-a) wrote:
Chad Smith (chad.smith) wrote:
Good points Adam. I'll make save_script_rc more generic and the health scripts will be a bit smarter to look for *OPENSTACK* vars where necessary.
I'm pulled onto something else at the moment, but I'll repost this review against the right source branch and get a change to you either today or tomorrow morning.
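The generalized scan Chad describes can be sketched like this (the variable names follow the OPENSTACK_PORT* convention the health scripts key on; the port numbers are made up for illustration, and in the charm the values come from the sourced scriptrc file rather than being set inline):

```shell
#!/bin/bash
# Export example values the way scriptrc would.
export OPENSTACK_PORT_PUBLIC=5000
export OPENSTACK_PORT_ADMIN=35347

# Collect the value of every OPENSTACK_PORT* variable. Anchoring the
# pattern at the start of the line avoids matching variables whose
# *value* merely contains the string OPENSTACK_PORT.
openstack_ports=$(env | awk -F '=' '/^OPENSTACK_PORT/ {print $2}')
```

The same pattern with OPENSTACK_SERVICE* drives the service_running check.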
- 56. By Chad Smith
  move now-generic save_script_rc to lib/openstack_common.py. Update health scripts to be more flexible based on OPENSTACK_PORT* and OPENSTACK_SERVICE* environment variables
- 57. By Chad Smith
  move add_to_cluster, remove_from_cluster and health_checks.d to a new scripts subdir
- 58. By Chad Smith
  more strict netstat port matching
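The stricter netstat matching in revision 58 matters because a bare `grep -q $port` against `netstat -ln` output matches substrings: a check for port 500 would wrongly pass while only 5000 is listening. A sketch of the tightened match (the helper name and sample line are illustrative; the real check pipes live `netstat -ln` output):

```shell
#!/bin/bash
# A captured listener line in `netstat -ln` format (illustrative).
sample='tcp        0      0 0.0.0.0:5000    0.0.0.0:*    LISTEN'

# Match ":PORT" only when followed by whitespace, so a port that is a
# prefix of another (500 vs 5000) cannot produce a false positive.
port_is_listening() {
    echo "$1" | grep -qE ":$2[[:space:]]"
}
```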
Unmerged revisions
Preview Diff
1 | === added file 'add_to_cluster' | |||
2 | --- add_to_cluster 1970-01-01 00:00:00 +0000 | |||
3 | +++ add_to_cluster 2013-02-22 19:26:19 +0000 | |||
4 | @@ -0,0 +1,2 @@ | |||
5 | 1 | #!/bin/bash | ||
6 | 2 | crm node online | ||
7 | 0 | 3 | ||
8 | === modified file 'config.yaml' | |||
9 | --- config.yaml 2012-10-12 17:26:48 +0000 | |||
10 | +++ config.yaml 2013-02-22 19:26:19 +0000 | |||
11 | @@ -26,6 +26,10 @@ | |||
12 | 26 | default: "/etc/keystone/keystone.conf" | 26 | default: "/etc/keystone/keystone.conf" |
13 | 27 | type: string | 27 | type: string |
14 | 28 | description: "Location of keystone configuration file" | 28 | description: "Location of keystone configuration file" |
15 | 29 | log-level: | ||
16 | 30 | default: WARNING | ||
17 | 31 | type: string | ||
18 | 32 | description: Log level (WARNING, INFO, DEBUG, ERROR) | ||
19 | 29 | service-port: | 33 | service-port: |
20 | 30 | default: 5000 | 34 | default: 5000 |
21 | 31 | type: int | 35 | type: int |
22 | @@ -75,3 +79,36 @@ | |||
23 | 75 | default: "keystone" | 79 | default: "keystone" |
24 | 76 | type: string | 80 | type: string |
25 | 77 | description: "Database username" | 81 | description: "Database username" |
26 | 82 | region: | ||
27 | 83 | default: RegionOne | ||
28 | 84 | type: string | ||
29 | 85 | description: "OpenStack Region(s) - separate multiple regions with single space" | ||
30 | 86 | # HA configuration settings | ||
31 | 87 | vip: | ||
32 | 88 | type: string | ||
33 | 89 | description: "Virtual IP to use to front keystone in ha configuration" | ||
34 | 90 | vip_iface: | ||
35 | 91 | type: string | ||
36 | 92 | default: eth0 | ||
37 | 93 | description: "Network Interface where to place the Virtual IP" | ||
38 | 94 | vip_cidr: | ||
39 | 95 | type: int | ||
40 | 96 | default: 24 | ||
41 | 97 | description: "Netmask that will be used for the Virtual IP" | ||
42 | 98 | ha-bindiface: | ||
43 | 99 | type: string | ||
44 | 100 | default: eth0 | ||
45 | 101 | description: | | ||
46 | 102 | Default network interface on which HA cluster will bind to communication | ||
47 | 103 | with the other members of the HA Cluster. | ||
48 | 104 | ha-mcastport: | ||
49 | 105 | type: int | ||
50 | 106 | default: 5403 | ||
51 | 107 | description: | | ||
52 | 108 | Default multicast port number that will be used to communicate between | ||
53 | 109 | HA Cluster nodes. | ||
54 | 110 | # PKI enablement and configuration (Grizzly and beyond) | ||
55 | 111 | enable-pki: | ||
56 | 112 | default: "false" | ||
57 | 113 | type: string | ||
58 | 114 | description: "Enable PKI token signing (Grizzly and beyond)" | ||
59 | 78 | 115 | ||
60 | === added directory 'health_checks.d' | |||
61 | === added file 'health_checks.d/service_ports_live' | |||
62 | --- health_checks.d/service_ports_live 1970-01-01 00:00:00 +0000 | |||
63 | +++ health_checks.d/service_ports_live 2013-02-22 19:26:19 +0000 | |||
64 | @@ -0,0 +1,13 @@ | |||
65 | 1 | #!/bin/bash | ||
66 | 2 | # Validate that service ports are active | ||
67 | 3 | HEALTH_DIR=`dirname $0` | ||
68 | 4 | UNIT_DIR=`dirname $HEALTH_DIR` | ||
69 | 5 | . $UNIT_DIR/scriptrc | ||
70 | 6 | set -e | ||
71 | 7 | |||
72 | 8 | # Grab any OPENSTACK_PORT* environment variables | ||
73 | 9 | openstack_ports=`env| awk -F '=' '(/OPENSTACK_PORT/){print $2}'` | ||
74 | 10 | for port in $openstack_ports | ||
75 | 11 | do | ||
76 | 12 | netstat -ln | grep -q $port | ||
77 | 13 | done | ||
78 | 0 | 14 | ||
79 | === added file 'health_checks.d/service_running' | |||
80 | --- health_checks.d/service_running 1970-01-01 00:00:00 +0000 | |||
81 | +++ health_checks.d/service_running 2013-02-22 19:26:19 +0000 | |||
82 | @@ -0,0 +1,13 @@ | |||
83 | 1 | #!/bin/bash | ||
84 | 2 | # Validate that service is running | ||
85 | 3 | HEALTH_DIR=`dirname $0` | ||
86 | 4 | UNIT_DIR=`dirname $HEALTH_DIR` | ||
87 | 5 | . $UNIT_DIR/scriptrc | ||
88 | 6 | set -e | ||
89 | 7 | |||
90 | 8 | # Grab any OPENSTACK_SERVICE* environment variables | ||
91 | 9 | openstack_service_names=`env| awk -F '=' '(/OPENSTACK_SERVICE/){print $2}'` | ||
92 | 10 | for service_name in $openstack_service_names | ||
93 | 11 | do | ||
94 | 12 | service $service_name status | grep -q running | ||
95 | 13 | done | ||
96 | 0 | 14 | ||
97 | === added symlink 'hooks/cluster-relation-changed' | |||
98 | === target is u'keystone-hooks' | |||
99 | === added symlink 'hooks/cluster-relation-departed' | |||
100 | === target is u'keystone-hooks' | |||
101 | === added symlink 'hooks/ha-relation-changed' | |||
102 | === target is u'keystone-hooks' | |||
103 | === added symlink 'hooks/ha-relation-joined' | |||
104 | === target is u'keystone-hooks' | |||
105 | === modified file 'hooks/keystone-hooks' | |||
106 | --- hooks/keystone-hooks 2012-12-12 03:52:01 +0000 | |||
107 | +++ hooks/keystone-hooks 2013-02-22 19:26:19 +0000 | |||
108 | @@ -8,7 +8,7 @@ | |||
109 | 8 | 8 | ||
110 | 9 | config = config_get() | 9 | config = config_get() |
111 | 10 | 10 | ||
113 | 11 | packages = "keystone python-mysqldb pwgen" | 11 | packages = "keystone python-mysqldb pwgen haproxy python-jinja2" |
114 | 12 | service = "keystone" | 12 | service = "keystone" |
115 | 13 | 13 | ||
116 | 14 | # used to verify joined services are valid openstack components. | 14 | # used to verify joined services are valid openstack components. |
117 | @@ -46,6 +46,14 @@ | |||
118 | 46 | "quantum": { | 46 | "quantum": { |
119 | 47 | "type": "network", | 47 | "type": "network", |
120 | 48 | "desc": "Quantum Networking Service" | 48 | "desc": "Quantum Networking Service" |
121 | 49 | }, | ||
122 | 50 | "oxygen": { | ||
123 | 51 | "type": "oxygen", | ||
124 | 52 | "desc": "Oxygen Cloud Image Service" | ||
125 | 53 | }, | ||
126 | 54 | "ceilometer": { | ||
127 | 55 | "type": "metering", | ||
128 | 56 | "desc": "Ceilometer Metering Service" | ||
129 | 49 | } | 57 | } |
130 | 50 | } | 58 | } |
131 | 51 | 59 | ||
132 | @@ -69,12 +77,14 @@ | |||
133 | 69 | driver='keystone.token.backends.sql.Token') | 77 | driver='keystone.token.backends.sql.Token') |
134 | 70 | update_config_block('ec2', | 78 | update_config_block('ec2', |
135 | 71 | driver='keystone.contrib.ec2.backends.sql.Ec2') | 79 | driver='keystone.contrib.ec2.backends.sql.Ec2') |
136 | 80 | |||
137 | 72 | execute("service keystone stop", echo=True) | 81 | execute("service keystone stop", echo=True) |
138 | 73 | execute("keystone-manage db_sync") | 82 | execute("keystone-manage db_sync") |
139 | 74 | execute("service keystone start", echo=True) | 83 | execute("service keystone start", echo=True) |
140 | 75 | time.sleep(5) | 84 | time.sleep(5) |
141 | 76 | ensure_initial_admin(config) | 85 | ensure_initial_admin(config) |
142 | 77 | 86 | ||
143 | 87 | |||
144 | 78 | def db_joined(): | 88 | def db_joined(): |
145 | 79 | relation_data = { "database": config["database"], | 89 | relation_data = { "database": config["database"], |
146 | 80 | "username": config["database-user"], | 90 | "username": config["database-user"], |
147 | @@ -84,15 +94,22 @@ | |||
148 | 84 | def db_changed(): | 94 | def db_changed(): |
149 | 85 | relation_data = relation_get_dict() | 95 | relation_data = relation_get_dict() |
150 | 86 | if ('password' not in relation_data or | 96 | if ('password' not in relation_data or |
153 | 87 | 'private-address' not in relation_data): | 97 | 'db_host' not in relation_data): |
154 | 88 | juju_log("private-address or password not set. Peer not ready, exit 0") | 98 | juju_log("db_host or password not set. Peer not ready, exit 0") |
155 | 89 | exit(0) | 99 | exit(0) |
156 | 90 | update_config_block('sql', connection="mysql://%s:%s@%s/%s" % | 100 | update_config_block('sql', connection="mysql://%s:%s@%s/%s" % |
157 | 91 | (config["database-user"], | 101 | (config["database-user"], |
158 | 92 | relation_data["password"], | 102 | relation_data["password"], |
160 | 93 | relation_data["private-address"], | 103 | relation_data["db_host"], |
161 | 94 | config["database"])) | 104 | config["database"])) |
162 | 105 | |||
163 | 95 | execute("service keystone stop", echo=True) | 106 | execute("service keystone stop", echo=True) |
164 | 107 | |||
165 | 108 | if not eligible_leader(): | ||
166 | 109 | juju_log('Deferring DB initialization to service leader.') | ||
167 | 110 | execute("service keystone start") | ||
168 | 111 | return | ||
169 | 112 | |||
170 | 96 | execute("keystone-manage db_sync", echo=True) | 113 | execute("keystone-manage db_sync", echo=True) |
171 | 97 | execute("service keystone start") | 114 | execute("service keystone start") |
172 | 98 | time.sleep(5) | 115 | time.sleep(5) |
173 | @@ -124,18 +141,30 @@ | |||
174 | 124 | realtion_set({ "admin_token": -1 }) | 141 | realtion_set({ "admin_token": -1 }) |
175 | 125 | return | 142 | return |
176 | 126 | 143 | ||
178 | 127 | def add_endpoint(region, service, public_url, admin_url, internal_url): | 144 | def add_endpoint(region, service, publicurl, adminurl, internalurl): |
179 | 128 | desc = valid_services[service]["desc"] | 145 | desc = valid_services[service]["desc"] |
180 | 129 | service_type = valid_services[service]["type"] | 146 | service_type = valid_services[service]["type"] |
181 | 130 | create_service_entry(service, service_type, desc) | 147 | create_service_entry(service, service_type, desc) |
182 | 131 | create_endpoint_template(region=region, service=service, | 148 | create_endpoint_template(region=region, service=service, |
186 | 132 | public_url=public_url, | 149 | publicurl=publicurl, |
187 | 133 | admin_url=admin_url, | 150 | adminurl=adminurl, |
188 | 134 | internal_url=internal_url) | 151 | internalurl=internalurl) |
189 | 152 | |||
190 | 153 | if not eligible_leader(): | ||
191 | 154 | juju_log('Deferring identity_changed() to service leader.') | ||
192 | 155 | return | ||
193 | 135 | 156 | ||
194 | 136 | settings = relation_get_dict(relation_id=relation_id, | 157 | settings = relation_get_dict(relation_id=relation_id, |
195 | 137 | remote_unit=remote_unit) | 158 | remote_unit=remote_unit) |
196 | 138 | 159 | ||
197 | 160 | # Allow the remote service to request creation of any additional roles. | ||
198 | 161 | # Currently used by Swift. | ||
199 | 162 | if 'requested_roles' in settings and settings['requested_roles'] != 'None': | ||
200 | 163 | roles = settings['requested_roles'].split(',') | ||
201 | 164 | juju_log("Creating requested roles: %s" % roles) | ||
202 | 165 | for role in roles: | ||
203 | 166 | create_role(role, user=config['admin-user'], tenant='admin') | ||
204 | 167 | |||
205 | 139 | # the minimum settings needed per endpoint | 168 | # the minimum settings needed per endpoint |
206 | 140 | single = set(['service', 'region', 'public_url', 'admin_url', | 169 | single = set(['service', 'region', 'public_url', 'admin_url', |
207 | 141 | 'internal_url']) | 170 | 'internal_url']) |
208 | @@ -145,13 +174,27 @@ | |||
209 | 145 | if 'None' in [v for k,v in settings.iteritems()]: | 174 | if 'None' in [v for k,v in settings.iteritems()]: |
210 | 146 | # Some backend services advertise no endpoint but require a | 175 | # Some backend services advertise no endpoint but require a |
211 | 147 | # hook execution to update auth strategy. | 176 | # hook execution to update auth strategy. |
212 | 177 | relation_data = {} | ||
213 | 178 | # Check if clustered and use vip + haproxy ports if so | ||
214 | 179 | if is_clustered(): | ||
215 | 180 | relation_data["auth_host"] = config['vip'] | ||
216 | 181 | relation_data["auth_port"] = SERVICE_PORTS['keystone_admin'] | ||
217 | 182 | relation_data["service_host"] = config['vip'] | ||
218 | 183 | relation_data["service_port"] = SERVICE_PORTS['keystone_service'] | ||
219 | 184 | else: | ||
220 | 185 | relation_data["auth_host"] = config['hostname'] | ||
221 | 186 | relation_data["auth_port"] = config['auth-port'] | ||
222 | 187 | relation_data["service_host"] = config['hostname'] | ||
223 | 188 | relation_data["service_port"] = config['service-port'] | ||
224 | 189 | relation_set(relation_data) | ||
225 | 148 | return | 190 | return |
226 | 149 | 191 | ||
227 | 192 | |||
228 | 150 | ensure_valid_service(settings['service']) | 193 | ensure_valid_service(settings['service']) |
229 | 151 | add_endpoint(region=settings['region'], service=settings['service'], | 194 | add_endpoint(region=settings['region'], service=settings['service'], |
233 | 152 | public_url=settings['public_url'], | 195 | publicurl=settings['public_url'], |
234 | 153 | admin_url=settings['admin_url'], | 196 | adminurl=settings['admin_url'], |
235 | 154 | internal_url=settings['internal_url']) | 197 | internalurl=settings['internal_url']) |
236 | 155 | service_username = settings['service'] | 198 | service_username = settings['service'] |
237 | 156 | else: | 199 | else: |
238 | 157 | # assemble multiple endpoints from relation data. service name | 200 | # assemble multiple endpoints from relation data. service name |
239 | @@ -186,9 +229,9 @@ | |||
240 | 186 | ep = endpoints[ep] | 229 | ep = endpoints[ep] |
241 | 187 | ensure_valid_service(ep['service']) | 230 | ensure_valid_service(ep['service']) |
242 | 188 | add_endpoint(region=ep['region'], service=ep['service'], | 231 | add_endpoint(region=ep['region'], service=ep['service'], |
246 | 189 | public_url=ep['public_url'], | 232 | publicurl=ep['public_url'], |
247 | 190 | admin_url=ep['admin_url'], | 233 | adminurl=ep['admin_url'], |
248 | 191 | internal_url=ep['internal_url']) | 234 | internalurl=ep['internal_url']) |
249 | 192 | services.append(ep['service']) | 235 | services.append(ep['service']) |
250 | 193 | service_username = '_'.join(services) | 236 | service_username = '_'.join(services) |
251 | 194 | 237 | ||
252 | @@ -201,26 +244,10 @@ | |||
253 | 201 | token = get_admin_token() | 244 | token = get_admin_token() |
254 | 202 | juju_log("Creating service credentials for '%s'" % service_username) | 245 | juju_log("Creating service credentials for '%s'" % service_username) |
255 | 203 | 246 | ||
265 | 204 | stored_passwd = '/var/lib/keystone/%s.passwd' % service_username | 247 | service_password = get_service_password(service_username) |
257 | 205 | if os.path.isfile(stored_passwd): | ||
258 | 206 | juju_log("Loading stored service passwd from %s" % stored_passwd) | ||
259 | 207 | service_password = open(stored_passwd, 'r').readline().strip('\n') | ||
260 | 208 | else: | ||
261 | 209 | juju_log("Generating a new service password for %s" % service_username) | ||
262 | 210 | service_password = execute('pwgen -c 32 1', die=True)[0].strip() | ||
263 | 211 | open(stored_passwd, 'w+').writelines("%s\n" % service_password) | ||
264 | 212 | |||
266 | 213 | create_user(service_username, service_password, config['service-tenant']) | 248 | create_user(service_username, service_password, config['service-tenant']) |
267 | 214 | grant_role(service_username, config['admin-role'], config['service-tenant']) | 249 | grant_role(service_username, config['admin-role'], config['service-tenant']) |
268 | 215 | 250 | ||
269 | 216 | # Allow the remote service to request creation of any additional roles. | ||
270 | 217 | # Currently used by Swift. | ||
271 | 218 | if 'requested_roles' in settings: | ||
272 | 219 | roles = settings['requested_roles'].split(',') | ||
273 | 220 | juju_log("Creating requested roles: %s" % roles) | ||
274 | 221 | for role in roles: | ||
275 | 222 | create_role(role, user=config['admin-user'], tenant='admin') | ||
276 | 223 | |||
277 | 224 | # As of https://review.openstack.org/#change,4675, all nodes hosting | 251 | # As of https://review.openstack.org/#change,4675, all nodes hosting |
278 | 225 | # an endpoint(s) needs a service username and password assigned to | 252 | # an endpoint(s) needs a service username and password assigned to |
279 | 226 | # the service tenant and granted admin role. | 253 | # the service tenant and granted admin role. |
280 | @@ -237,7 +264,15 @@ | |||
281 | 237 | "service_password": service_password, | 264 | "service_password": service_password, |
282 | 238 | "service_tenant": config['service-tenant'] | 265 | "service_tenant": config['service-tenant'] |
283 | 239 | } | 266 | } |
284 | 267 | # Check if clustered and use vip + haproxy ports if so | ||
285 | 268 | if is_clustered(): | ||
286 | 269 | relation_data["auth_host"] = config['vip'] | ||
287 | 270 | relation_data["auth_port"] = SERVICE_PORTS['keystone_admin'] | ||
288 | 271 | relation_data["service_host"] = config['vip'] | ||
289 | 272 | relation_data["service_port"] = SERVICE_PORTS['keystone_service'] | ||
290 | 273 | |||
291 | 240 | relation_set(relation_data) | 274 | relation_set(relation_data) |
292 | 275 | synchronize_service_credentials() | ||
293 | 241 | 276 | ||
294 | 242 | def config_changed(): | 277 | def config_changed(): |
295 | 243 | 278 | ||
296 | @@ -246,11 +281,117 @@ | |||
297 | 246 | available = get_os_codename_install_source(config['openstack-origin']) | 281 | available = get_os_codename_install_source(config['openstack-origin']) |
298 | 247 | installed = get_os_codename_package('keystone') | 282 | installed = get_os_codename_package('keystone') |
299 | 248 | 283 | ||
301 | 249 | if get_os_version_codename(available) > get_os_version_codename(installed): | 284 | if (available and |
302 | 285 | get_os_version_codename(available) > get_os_version_codename(installed)): | ||
303 | 250 | do_openstack_upgrade(config['openstack-origin'], packages) | 286 | do_openstack_upgrade(config['openstack-origin'], packages) |
304 | 251 | 287 | ||
305 | 288 | env_vars = {'OPENSTACK_SERVICE_KEYSTONE': 'keystone', | ||
306 | 289 | 'OPENSTACK_PORT_ADMIN': config['admin-port'], | ||
307 | 290 | 'OPENSTACK_PORT_PUBLIC': config['service-port']} | ||
308 | 291 | save_script_rc(**env_vars) | ||
309 | 292 | |||
310 | 252 | set_admin_token(config['admin-token']) | 293 | set_admin_token(config['admin-token']) |
312 | 253 | ensure_initial_admin(config) | 294 | |
313 | 295 | if eligible_leader(): | ||
314 | 296 | juju_log('Cluster leader - ensuring endpoint configuration is up to date') | ||
315 | 297 | ensure_initial_admin(config) | ||
316 | 298 | |||
317 | 299 | update_config_block('logger_root', level=config['log-level'], | ||
318 | 300 | file='/etc/keystone/logging.conf') | ||
319 | 301 | if get_os_version_package('keystone') >= '2013.1': | ||
320 | 302 | # PKI introduced in Grizzly | ||
321 | 303 | configure_pki_tokens(config) | ||
322 | 304 | |||
323 | 305 | execute("service keystone restart", echo=True) | ||
324 | 306 | cluster_changed() | ||
325 | 307 | |||
326 | 308 | |||
327 | 309 | def upgrade_charm(): | ||
328 | 310 | cluster_changed() | ||
329 | 311 | if eligible_leader(): | ||
330 | 312 | juju_log('Cluster leader - ensuring endpoint configuration is up to date') | ||
331 | 313 | ensure_initial_admin(config) | ||
332 | 314 | |||
333 | 315 | |||
334 | 316 | SERVICE_PORTS = { | ||
335 | 317 | "keystone_admin": int(config['admin-port']) + 1, | ||
336 | 318 | "keystone_service": int(config['service-port']) + 1 | ||
337 | 319 | } | ||
338 | 320 | |||
339 | 321 | |||
340 | 322 | def cluster_changed(): | ||
341 | 323 | cluster_hosts = {} | ||
342 | 324 | cluster_hosts['self'] = config['hostname'] | ||
343 | 325 | for r_id in relation_ids('cluster'): | ||
344 | 326 | for unit in relation_list(r_id): | ||
345 | 327 | cluster_hosts[unit.replace('/','-')] = \ | ||
346 | 328 | relation_get_dict(relation_id=r_id, | ||
347 | 329 | remote_unit=unit)['private-address'] | ||
348 | 330 | configure_haproxy(cluster_hosts, | ||
349 | 331 | SERVICE_PORTS) | ||
350 | 332 | |||
351 | 333 | synchronize_service_credentials() | ||
352 | 334 | |||
353 | 335 | |||
354 | 336 | def ha_relation_changed(): | ||
355 | 337 | relation_data = relation_get_dict() | ||
356 | 338 | if ('clustered' in relation_data and | ||
357 | 339 | is_leader()): | ||
358 | 340 | juju_log('Cluster configured, notifying other services and updating' | ||
359 | 341 | 'keystone endpoint configuration') | ||
360 | 342 | # Update keystone endpoint to point at VIP | ||
361 | 343 | ensure_initial_admin(config) | ||
362 | 344 | # Tell all related services to start using | ||
363 | 345 | # the VIP and haproxy ports instead | ||
364 | 346 | for r_id in relation_ids('identity-service'): | ||
365 | 347 | relation_set_2(rid=r_id, | ||
366 | 348 | auth_host=config['vip'], | ||
367 | 349 | service_host=config['vip'], | ||
368 | 350 | service_port=SERVICE_PORTS['keystone_service'], | ||
369 | 351 | auth_port=SERVICE_PORTS['keystone_admin']) | ||
370 | 352 | |||
371 | 353 | |||
372 | 354 | def ha_relation_joined(): | ||
373 | 355 | # Obtain the config values necessary for the cluster config. These | ||
374 | 356 | # include multicast port and interface to bind to. | ||
375 | 357 | corosync_bindiface = config['ha-bindiface'] | ||
376 | 358 | corosync_mcastport = config['ha-mcastport'] | ||
377 | 359 | |||
378 | 360 | # Obtain resources | ||
379 | 361 | resources = { | ||
380 | 362 | 'res_ks_vip':'ocf:heartbeat:IPaddr2', | ||
381 | 363 | 'res_ks_haproxy':'lsb:haproxy' | ||
382 | 364 | } | ||
383 | 365 | # TODO: Obtain netmask and nic where to place VIP. | ||
384 | 366 | resource_params = { | ||
385 | 367 | 'res_ks_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' % (config['vip'], | ||
386 | 368 | config['vip_cidr'], config['vip_iface']), | ||
387 | 369 | 'res_ks_haproxy':'op monitor interval="5s"' | ||
388 | 370 | } | ||
389 | 371 | init_services = { | ||
390 | 372 | 'res_ks_haproxy':'haproxy' | ||
391 | 373 | } | ||
392 | 374 | groups = { | ||
393 | 375 | 'grp_ks_haproxy':'res_ks_vip res_ks_haproxy' | ||
394 | 376 | } | ||
395 | 377 | #clones = { | ||
396 | 378 | # 'cln_ks_haproxy':'res_ks_haproxy meta globally-unique="false" interleave="true"' | ||
397 | 379 | # } | ||
398 | 380 | |||
399 | 381 | #orders = { | ||
400 | 382 | # 'ord_vip_before_haproxy':'inf: res_ks_vip res_ks_haproxy' | ||
401 | 383 | # } | ||
402 | 384 | #colocations = { | ||
403 | 385 | # 'col_vip_on_haproxy':'inf: res_ks_haproxy res_ks_vip' | ||
404 | 386 | # } | ||
405 | 387 | |||
406 | 388 | relation_set_2(init_services=init_services, | ||
407 | 389 | corosync_bindiface=corosync_bindiface, | ||
408 | 390 | corosync_mcastport=corosync_mcastport, | ||
409 | 391 | resources=resources, | ||
410 | 392 | resource_params=resource_params, | ||
411 | 393 | groups=groups) | ||
412 | 394 | |||
413 | 254 | 395 | ||
414 | 255 | hooks = { | 396 | hooks = { |
415 | 256 | "install": install_hook, | 397 | "install": install_hook, |
416 | @@ -258,12 +399,17 @@ | |||
417 | 258 | "shared-db-relation-changed": db_changed, | 399 | "shared-db-relation-changed": db_changed, |
418 | 259 | "identity-service-relation-joined": identity_joined, | 400 | "identity-service-relation-joined": identity_joined, |
419 | 260 | "identity-service-relation-changed": identity_changed, | 401 | "identity-service-relation-changed": identity_changed, |
421 | 261 | "config-changed": config_changed | 402 | "config-changed": config_changed, |
422 | 403 | "cluster-relation-changed": cluster_changed, | ||
423 | 404 | "cluster-relation-departed": cluster_changed, | ||
424 | 405 | "ha-relation-joined": ha_relation_joined, | ||
425 | 406 | "ha-relation-changed": ha_relation_changed, | ||
426 | 407 | "upgrade-charm": upgrade_charm | ||
427 | 262 | } | 408 | } |
428 | 263 | 409 | ||
429 | 264 | # keystone-hooks gets called by symlink corresponding to the requested relation | 410 | # keystone-hooks gets called by symlink corresponding to the requested relation |
430 | 265 | # hook. | 411 | # hook. |
435 | 266 | arg0 = sys.argv[0].split("/").pop() | 412 | hook = os.path.basename(sys.argv[0]) |
436 | 267 | if arg0 not in hooks.keys(): | 413 | if hook not in hooks.keys(): |
437 | 268 | error_out("Unsupported hook: %s" % arg0) | 414 | error_out("Unsupported hook: %s" % hook) |
438 | 269 | hooks[arg0]() | 415 | hooks[hook]() |
439 | 270 | 416 | ||
440 | === modified file 'hooks/lib/openstack_common.py' | |||
441 | --- hooks/lib/openstack_common.py 2012-12-05 20:35:05 +0000 | |||
442 | +++ hooks/lib/openstack_common.py 2013-02-22 19:26:19 +0000 | |||
443 | @@ -3,6 +3,7 @@ | |||
444 | 3 | # Common python helper functions used for OpenStack charms. | 3 | # Common python helper functions used for OpenStack charms. |
445 | 4 | 4 | ||
446 | 5 | import subprocess | 5 | import subprocess |
447 | 6 | import os | ||
448 | 6 | 7 | ||
449 | 7 | CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" | 8 | CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu" |
450 | 8 | CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' | 9 | CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA' |
451 | @@ -22,6 +23,12 @@ | |||
452 | 22 | '2013.1': 'grizzly' | 23 | '2013.1': 'grizzly' |
453 | 23 | } | 24 | } |
454 | 24 | 25 | ||
455 | 26 | # The ugly duckling | ||
456 | 27 | swift_codenames = { | ||
457 | 28 | '1.4.3': 'diablo', | ||
458 | 29 | '1.4.8': 'essex', | ||
459 | 30 | '1.7.4': 'folsom' | ||
460 | 31 | } | ||
461 | 25 | 32 | ||
462 | 26 | def juju_log(msg): | 33 | def juju_log(msg): |
463 | 27 | subprocess.check_call(['juju-log', msg]) | 34 | subprocess.check_call(['juju-log', msg]) |
464 | @@ -118,12 +125,32 @@ | |||
465 | 118 | 125 | ||
466 | 119 | vers = vers[:6] | 126 | vers = vers[:6] |
467 | 120 | try: | 127 | try: |
469 | 121 | return openstack_codenames[vers] | 128 | if 'swift' in pkg: |
470 | 129 | vers = vers[:5] | ||
471 | 130 | return swift_codenames[vers] | ||
472 | 131 | else: | ||
473 | 132 | vers = vers[:6] | ||
474 | 133 | return openstack_codenames[vers] | ||
475 | 122 | except KeyError: | 134 | except KeyError: |
476 | 123 | e = 'Could not determine OpenStack codename for version %s' % vers | 135 | e = 'Could not determine OpenStack codename for version %s' % vers |
477 | 124 | error_out(e) | 136 | error_out(e) |
478 | 125 | 137 | ||
479 | 126 | 138 | ||
480 | 139 | def get_os_version_package(pkg): | ||
481 | 140 | '''Derive OpenStack version number from an installed package.''' | ||
482 | 141 | codename = get_os_codename_package(pkg) | ||
483 | 142 | |||
484 | 143 | if 'swift' in pkg: | ||
485 | 144 | vers_map = swift_codenames | ||
486 | 145 | else: | ||
487 | 146 | vers_map = openstack_codenames | ||
488 | 147 | |||
489 | 148 | for version, cname in vers_map.iteritems(): | ||
490 | 149 | if cname == codename: | ||
491 | 150 | return version | ||
492 | 151 | e = "Could not determine OpenStack version for package: %s" % pkg | ||
493 | 152 | error_out(e) | ||
494 | 153 | |||
495 | 127 | def configure_installation_source(rel): | 154 | def configure_installation_source(rel): |
496 | 128 | '''Configure apt installation source.''' | 155 | '''Configure apt installation source.''' |
497 | 129 | 156 | ||
498 | @@ -164,9 +191,11 @@ | |||
499 | 164 | 'version (%s)' % (ca_rel, ubuntu_rel) | 191 | 'version (%s)' % (ca_rel, ubuntu_rel) |
500 | 165 | error_out(e) | 192 | error_out(e) |
501 | 166 | 193 | ||
503 | 167 | if ca_rel == 'folsom/staging': | 194 | if 'staging' in ca_rel: |
504 | 168 | # staging is just a regular PPA. | 195 | # staging is just a regular PPA. |
506 | 169 | cmd = 'add-apt-repository -y ppa:ubuntu-cloud-archive/folsom-staging' | 196 | os_rel = ca_rel.split('/')[0] |
507 | 197 | ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel | ||
508 | 198 | cmd = 'add-apt-repository -y %s' % ppa | ||
509 | 170 | subprocess.check_call(cmd.split(' ')) | 199 | subprocess.check_call(cmd.split(' ')) |
510 | 171 | return | 200 | return |
511 | 172 | 201 | ||
512 | @@ -174,7 +203,10 @@ | |||
513 | 174 | pockets = { | 203 | pockets = { |
514 | 175 | 'folsom': 'precise-updates/folsom', | 204 | 'folsom': 'precise-updates/folsom', |
515 | 176 | 'folsom/updates': 'precise-updates/folsom', | 205 | 'folsom/updates': 'precise-updates/folsom', |
517 | 177 | 'folsom/proposed': 'precise-proposed/folsom' | 206 | 'folsom/proposed': 'precise-proposed/folsom', |
518 | 207 | 'grizzly': 'precise-updates/grizzly', | ||
519 | 208 | 'grizzly/updates': 'precise-updates/grizzly', | ||
520 | 209 | 'grizzly/proposed': 'precise-proposed/grizzly' | ||
521 | 178 | } | 210 | } |
522 | 179 | 211 | ||
523 | 180 | try: | 212 | try: |
524 | @@ -191,3 +223,39 @@ | |||
525 | 191 | else: | 223 | else: |
526 | 192 | error_out("Invalid openstack-release specified: %s" % rel) | 224 | error_out("Invalid openstack-release specified: %s" % rel) |
527 | 193 | 225 | ||
528 | 226 | HAPROXY_CONF = '/etc/haproxy/haproxy.cfg' | ||
529 | 227 | HAPROXY_DEFAULT = '/etc/default/haproxy' | ||
530 | 228 | |||
531 | 229 | def configure_haproxy(units, service_ports, template_dir=None): | ||
532 | 230 | template_dir = template_dir or 'templates' | ||
533 | 231 | import jinja2 | ||
534 | 232 | context = { | ||
535 | 233 | 'units': units, | ||
536 | 234 | 'service_ports': service_ports | ||
537 | 235 | } | ||
538 | 236 | templates = jinja2.Environment( | ||
539 | 237 | loader=jinja2.FileSystemLoader(template_dir) | ||
540 | 238 | ) | ||
541 | 239 | template = templates.get_template( | ||
542 | 240 | os.path.basename(HAPROXY_CONF) | ||
543 | 241 | ) | ||
544 | 242 | with open(HAPROXY_CONF, 'w') as f: | ||
545 | 243 | f.write(template.render(context)) | ||
546 | 244 | with open(HAPROXY_DEFAULT, 'w') as f: | ||
547 | 245 | f.write('ENABLED=1') | ||
548 | 246 | |||
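`configure_haproxy()` above renders `templates/haproxy.cfg` through Jinja2 with a `units`/`service_ports` context. A self-contained sketch of that rendering, assuming Jinja2 is installed and using an inline stand-in for the template file (spelled with Python 3's `items()` rather than the shipped template's Python 2 `iteritems()`):

```python
import jinja2  # third-party; the charm imports it inside configure_haproxy()

# Inline stand-in for the listen/server loop in templates/haproxy.cfg.
TEMPLATE = (
    "{% for service, port in service_ports.items() -%}\n"
    "listen {{ service }} 0.0.0.0:{{ port }}\n"
    "{% for unit, address in units.items() -%}\n"
    "    server {{ unit }} {{ address }}:{{ port - 1 }} check\n"
    "{% endfor %}{% endfor %}"
)

context = {
    'units': {'keystone-0': '10.0.0.10'},
    'service_ports': {'keystone_admin': 35358},
}
rendered = jinja2.Template(TEMPLATE).render(context)
```

Note the `port - 1` in the server line: haproxy fronts each service one port above where the real daemon listens.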
549 | 247 | def save_script_rc(script_path="scriptrc", **env_vars): | ||
550 | 248 | """ | ||
551 | 249 | Write an rc file in the charm-delivered directory containing | ||
552 | 250 | exported environment variables provided by env_vars. Any charm scripts run | ||
553 | 251 | outside the juju hook environment can source this scriptrc to obtain | ||
554 | 252 | updated config information necessary to perform health checks or | ||
555 | 253 | service changes. | ||
556 | 254 | """ | ||
557 | 255 | unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-') | ||
558 | 256 | juju_rc_path="/var/lib/juju/units/%s/charm/%s" % (unit_name, script_path) | ||
559 | 257 | with open(juju_rc_path, 'wb') as rc_script: | ||
560 | 258 | rc_script.write( | ||
561 | 259 | "#!/bin/bash\n") | ||
562 | 260 | [rc_script.write('export %s=%s\n' % (u, p)) | ||
563 | 261 | for u, p in env_vars.iteritems() if u != "script_path"] | ||
564 | 194 | 262 | ||
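`save_script_rc()` above writes a sourceable rc file under the unit's charm directory so out-of-hook scripts (the health checks and cluster scripts this proposal delivers) can see charm config. A sketch of the file body it produces, refactored to return a string instead of writing under `/var/lib/juju` (function name `render_script_rc` is mine):

```python
def render_script_rc(**env_vars):
    """Build the scriptrc body save_script_rc() writes to
    /var/lib/juju/units/<unit>/charm/<script_path>; the 'script_path'
    kwarg itself is excluded, as in the hook code."""
    lines = ["#!/bin/bash"]
    for key, value in env_vars.items():
        if key == 'script_path':
            continue
        lines.append('export %s=%s' % (key, value))
    return '\n'.join(lines) + '\n'

body = render_script_rc(OPENSTACK_PORT_ADMIN=35347, script_path='scriptrc')
```

A health check then just does `source scriptrc` and reads the exported variables.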
565 | === added symlink 'hooks/upgrade-charm' | |||
566 | === target is u'keystone-hooks' | |||
567 | === modified file 'hooks/utils.py' | |||
568 | --- hooks/utils.py 2012-12-12 03:52:41 +0000 | |||
569 | +++ hooks/utils.py 2013-02-22 19:26:19 +0000 | |||
570 | @@ -1,4 +1,5 @@ | |||
571 | 1 | #!/usr/bin/python | 1 | #!/usr/bin/python |
572 | 2 | import ConfigParser | ||
573 | 2 | import subprocess | 3 | import subprocess |
574 | 3 | import sys | 4 | import sys |
575 | 4 | import json | 5 | import json |
576 | @@ -10,9 +11,10 @@ | |||
577 | 10 | keystone_conf = "/etc/keystone/keystone.conf" | 11 | keystone_conf = "/etc/keystone/keystone.conf" |
578 | 11 | stored_passwd = "/var/lib/keystone/keystone.passwd" | 12 | stored_passwd = "/var/lib/keystone/keystone.passwd" |
579 | 12 | stored_token = "/var/lib/keystone/keystone.token" | 13 | stored_token = "/var/lib/keystone/keystone.token" |
580 | 14 | SERVICE_PASSWD_PATH = '/var/lib/keystone/services.passwd' | ||
581 | 13 | 15 | ||
582 | 14 | def execute(cmd, die=False, echo=False): | 16 | def execute(cmd, die=False, echo=False): |
584 | 15 | """ Executes a command | 17 | """ Executes a command |
585 | 16 | 18 | ||
586 | 17 | if die=True, script will exit(1) if command does not return 0 | 19 | if die=True, script will exit(1) if command does not return 0 |
587 | 18 | if echo=True, output of command will be printed to stdout | 20 | if echo=True, output of command will be printed to stdout |
588 | @@ -79,6 +81,20 @@ | |||
589 | 79 | for k in relation_data: | 81 | for k in relation_data: |
590 | 80 | execute("relation-set %s=%s" % (k, relation_data[k]), die=True) | 82 | execute("relation-set %s=%s" % (k, relation_data[k]), die=True) |
591 | 81 | 83 | ||
592 | 84 | def relation_set_2(**kwargs): | ||
593 | 85 | cmd = [ | ||
594 | 86 | 'relation-set' | ||
595 | 87 | ] | ||
596 | 88 | args = [] | ||
597 | 89 | for k, v in kwargs.items(): | ||
598 | 90 | if k == 'rid': | ||
599 | 91 | cmd.append('-r') | ||
600 | 92 | cmd.append(v) | ||
601 | 93 | else: | ||
602 | 94 | args.append('{}={}'.format(k, v)) | ||
603 | 95 | cmd += args | ||
604 | 96 | subprocess.check_call(cmd) | ||
605 | 97 | |||
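`relation_set_2()` above replaces string-formatted `relation-set` calls with an argv list, routing the special `rid` kwarg to `-r`. A sketch of just the command construction (split out so it can run without the juju tool present):

```python
def build_relation_set_cmd(**kwargs):
    """Mirror relation_set_2(): turn kwargs into a relation-set argv,
    sending the 'rid' kwarg to -r. Sketch only; nothing is executed."""
    cmd = ['relation-set']
    args = []
    for k, v in kwargs.items():
        if k == 'rid':
            cmd += ['-r', v]
        else:
            args.append('{}={}'.format(k, v))
    return cmd + args

cmd = build_relation_set_cmd(rid='cluster:0', clustered='yes')
```

Passing argv lists to `subprocess.check_call` avoids the quoting problems the older `execute("relation-set %s=%s" % ...)` pattern has with values containing spaces or shell metacharacters.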
606 | 82 | def relation_get(relation_data): | 98 | def relation_get(relation_data): |
607 | 83 | """ Obtain all current relation data | 99 | """ Obtain all current relation data |
608 | 84 | relation_data is a list of options to query from the relation | 100 | relation_data is a list of options to query from the relation |
609 | @@ -149,106 +165,26 @@ | |||
610 | 149 | keystone_conf) | 165 | keystone_conf) |
611 | 150 | error_out('Could not find admin_token line in %s' % keystone_conf) | 166 | error_out('Could not find admin_token line in %s' % keystone_conf) |
612 | 151 | 167 | ||
614 | 152 | def update_config_block(block, **kwargs): | 168 | def update_config_block(section, **kwargs): |
615 | 153 | """ Updates keystone.conf blocks given kwargs. | 169 | """ Updates keystone.conf blocks given kwargs. |
714 | 154 | Can be used to update driver settings for a particular backend, | 170 | Update a config setting in a specific setting of a config |
715 | 155 | setting the sql connection, etc. | 171 | file (/etc/keystone/keystone.conf, by default) |
716 | 156 | 172 | """ | |
717 | 157 | Parses block heading as '[block]' | 173 | if 'file' in kwargs: |
718 | 158 | 174 | conf_file = kwargs['file'] | |
719 | 159 | If block does not exist, a new block will be created at end of file with | 175 | del kwargs['file'] |
720 | 160 | given kwargs | 176 | else: |
721 | 161 | """ | 177 | conf_file = keystone_conf |
722 | 162 | f = open(keystone_conf, "r+") | 178 | config = ConfigParser.RawConfigParser() |
723 | 163 | orig = f.readlines() | 179 | config.read(conf_file) |
724 | 164 | new = [] | 180 | |
725 | 165 | found_block = "" | 181 | if section != 'DEFAULT' and not config.has_section(section): |
726 | 166 | heading = "[%s]\n" % block | 182 | config.add_section(section) |
727 | 167 | 183 | ||
728 | 168 | lines = len(orig) | 184 | for k, v in kwargs.iteritems(): |
729 | 169 | ln = 0 | 185 | config.set(section, k, v) |
730 | 170 | 186 | with open(conf_file, 'wb') as out: | |
731 | 171 | def update_block(block): | 187 | config.write(out) |
634 | 172 | for k, v in kwargs.iteritems(): | ||
635 | 173 | for l in block: | ||
636 | 174 | if l.strip().split(" ")[0] == k: | ||
637 | 175 | block[block.index(l)] = "%s = %s\n" % (k, v) | ||
638 | 176 | return | ||
639 | 177 | block.append('%s = %s\n' % (k, v)) | ||
640 | 178 | block.append('\n') | ||
641 | 179 | |||
642 | 180 | try: | ||
643 | 181 | found = False | ||
644 | 182 | while ln < lines: | ||
645 | 183 | if orig[ln] != heading: | ||
646 | 184 | new.append(orig[ln]) | ||
647 | 185 | ln += 1 | ||
648 | 186 | else: | ||
649 | 187 | new.append(orig[ln]) | ||
650 | 188 | ln += 1 | ||
651 | 189 | block = [] | ||
652 | 190 | while orig[ln].strip() != '': | ||
653 | 191 | block.append(orig[ln]) | ||
654 | 192 | ln += 1 | ||
655 | 193 | update_block(block) | ||
656 | 194 | new += block | ||
657 | 195 | found = True | ||
658 | 196 | |||
659 | 197 | if not found: | ||
660 | 198 | if new[(len(new) - 1)].strip() != '': | ||
661 | 199 | new.append('\n') | ||
662 | 200 | new.append('%s' % heading) | ||
663 | 201 | for k, v in kwargs.iteritems(): | ||
664 | 202 | new.append('%s = %s\n' % (k, v)) | ||
665 | 203 | new.append('\n') | ||
666 | 204 | except: | ||
667 | 205 | error_out('Error while attempting to update config block. '\ | ||
668 | 206 | 'Refusing to overwite existing config.') | ||
669 | 207 | |||
670 | 208 | return | ||
671 | 209 | |||
672 | 210 | # backup original config | ||
673 | 211 | backup = open(keystone_conf + '.juju-back', 'w+') | ||
674 | 212 | for l in orig: | ||
675 | 213 | backup.write(l) | ||
676 | 214 | backup.close() | ||
677 | 215 | |||
678 | 216 | # update config | ||
679 | 217 | f.seek(0) | ||
680 | 218 | f.truncate() | ||
681 | 219 | for l in new: | ||
682 | 220 | f.write(l) | ||
683 | 221 | |||
684 | 222 | |||
685 | 223 | def keystone_conf_update(opt, val): | ||
686 | 224 | """ Updates keystone.conf values | ||
687 | 225 | If option exists, it is reset to new value | ||
688 | 226 | If it does not, it added to the top of the config file after the [DEFAULT] | ||
689 | 227 | heading to keep it out of the paste deploy config | ||
690 | 228 | """ | ||
691 | 229 | f = open(keystone_conf, "r+") | ||
692 | 230 | orig = f.readlines() | ||
693 | 231 | new = "" | ||
694 | 232 | found = False | ||
695 | 233 | for l in orig: | ||
696 | 234 | if l.split(' ')[0] == opt: | ||
697 | 235 | juju_log("Updating %s, setting %s = %s" % (keystone_conf, opt, val)) | ||
698 | 236 | new += "%s = %s\n" % (opt, val) | ||
699 | 237 | found = True | ||
700 | 238 | else: | ||
701 | 239 | new += l | ||
702 | 240 | new = new.split('\n') | ||
703 | 241 | # insert a new value at the top of the file, after the 'DEFAULT' header so | ||
704 | 242 | # as not to muck up paste deploy configuration later in the file | ||
705 | 243 | if not found: | ||
706 | 244 | juju_log("Adding new config option %s = %s" % (opt, val)) | ||
707 | 245 | header = new.index("[DEFAULT]") | ||
708 | 246 | new.insert((header+1), "%s = %s" % (opt, val)) | ||
709 | 247 | f.seek(0) | ||
710 | 248 | f.truncate() | ||
711 | 249 | for l in new: | ||
712 | 250 | f.write("%s\n" % l) | ||
713 | 251 | f.close | ||
732 | 252 | 188 | ||
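The rewritten `update_config_block()` above replaces roughly a hundred lines of hand-rolled block parsing (plus the now-deleted `keystone_conf_update()`) with stdlib `ConfigParser`. A runnable Python 3 sketch of the same helper, writing to a temp file rather than `/etc/keystone/keystone.conf`:

```python
import configparser  # Python 3 name for the diff's ConfigParser module
import os
import tempfile

def update_config_block(section, conf_file, **kwargs):
    """Set key=value pairs in one section of an ini-style file, creating
    the section on demand (sketch; the charm defaults conf_file to
    /etc/keystone/keystone.conf via a 'file' kwarg instead)."""
    config = configparser.RawConfigParser()
    config.read(conf_file)
    if section != 'DEFAULT' and not config.has_section(section):
        config.add_section(section)
    for k, v in kwargs.items():
        config.set(section, k, v)
    with open(conf_file, 'w') as out:
        config.write(out)

conf = os.path.join(tempfile.mkdtemp(), 'keystone.conf')
open(conf, 'w').close()  # start from an empty config
update_config_block('signing', conf, token_format='UUID')

check = configparser.RawConfigParser()
check.read(conf)
```

One behavioral difference worth noting: `ConfigParser.write()` rewrites the whole file, so comments in keystone.conf are dropped, whereas the old line-based code preserved them.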
733 | 253 | def create_service_entry(service_name, service_type, service_desc, owner=None): | 189 | def create_service_entry(service_name, service_type, service_desc, owner=None): |
734 | 254 | """ Add a new service entry to keystone if one does not already exist """ | 190 | """ Add a new service entry to keystone if one does not already exist """ |
735 | @@ -264,8 +200,8 @@ | |||
736 | 264 | description=service_desc) | 200 | description=service_desc) |
737 | 265 | juju_log("Created new service entry '%s'" % service_name) | 201 | juju_log("Created new service entry '%s'" % service_name) |
738 | 266 | 202 | ||
741 | 267 | def create_endpoint_template(region, service, public_url, admin_url, | 203 | def create_endpoint_template(region, service, publicurl, adminurl, |
742 | 268 | internal_url): | 204 | internalurl): |
743 | 269 | """ Create a new endpoint template for service if one does not already | 205 | """ Create a new endpoint template for service if one does not already |
744 | 270 | exist matching name *and* region """ | 206 | exist matching name *and* region """ |
745 | 271 | import manager | 207 | import manager |
746 | @@ -276,13 +212,24 @@ | |||
747 | 276 | if ep['service_id'] == service_id and ep['region'] == region: | 212 | if ep['service_id'] == service_id and ep['region'] == region: |
748 | 277 | juju_log("Endpoint template already exists for '%s' in '%s'" | 213 | juju_log("Endpoint template already exists for '%s' in '%s'" |
749 | 278 | % (service, region)) | 214 | % (service, region)) |
751 | 279 | return | 215 | |
752 | 216 | up_to_date = True | ||
753 | 217 | for k in ['publicurl', 'adminurl', 'internalurl']: | ||
754 | 218 | if ep[k] != locals()[k]: | ||
755 | 219 | up_to_date = False | ||
756 | 220 | |||
757 | 221 | if up_to_date: | ||
758 | 222 | return | ||
759 | 223 | else: | ||
760 | 224 | # delete endpoint and recreate if endpoint urls need updating. | ||
761 | 225 | juju_log("Updating endpoint template with new endpoint urls.") | ||
762 | 226 | manager.api.endpoints.delete(ep['id']) | ||
763 | 280 | 227 | ||
764 | 281 | manager.api.endpoints.create(region=region, | 228 | manager.api.endpoints.create(region=region, |
765 | 282 | service_id=service_id, | 229 | service_id=service_id, |
769 | 283 | publicurl=public_url, | 230 | publicurl=publicurl, |
770 | 284 | adminurl=admin_url, | 231 | adminurl=adminurl, |
771 | 285 | internalurl=internal_url) | 232 | internalurl=internalurl) |
772 | 286 | juju_log("Created new endpoint template for '%s' in '%s'" % | 233 | juju_log("Created new endpoint template for '%s' in '%s'" % |
773 | 287 | (region, service)) | 234 | (region, service)) |
774 | 288 | 235 | ||
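The hunk above changes `create_endpoint_template()` from "return if an endpoint exists" to "delete and recreate if its URLs drifted". The up-to-date check is done via `locals()[k]`; the same comparison spelled out with an explicit dict (function name `endpoint_up_to_date` is mine):

```python
def endpoint_up_to_date(ep, publicurl, adminurl, internalurl):
    """True when an existing endpoint record already carries the desired
    URLs (sketch of the locals()-based loop in the diff)."""
    desired = {'publicurl': publicurl,
               'adminurl': adminurl,
               'internalurl': internalurl}
    return all(ep[k] == v for k, v in desired.items())

ep = {'publicurl': 'http://10.0.0.10:5000/v2.0',
      'adminurl': 'http://10.0.0.10:35357/v2.0',
      'internalurl': 'http://10.0.0.10:5000/v2.0'}
```

This is what lets a unit transition to clustered mode: when the VIP-based URLs differ from the recorded ones, the stale endpoint is deleted and recreated.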
775 | @@ -411,12 +358,33 @@ | |||
776 | 411 | create_service_entry("keystone", "identity", "Keystone Identity Service") | 358 | create_service_entry("keystone", "identity", "Keystone Identity Service") |
777 | 412 | # following documentation here, perhaps we should be using juju | 359 | # following documentation here, perhaps we should be using juju |
778 | 413 | # public/private addresses for public/internal urls. | 360 | # public/private addresses for public/internal urls. |
783 | 414 | public_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"]) | 361 | if is_clustered(): |
784 | 415 | admin_url = "http://%s:%s/v2.0" % (config["hostname"], config["admin-port"]) | 362 | juju_log("Creating endpoint for clustered configuration") |
785 | 416 | internal_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"]) | 363 | for region in config['region'].split(): |
786 | 417 | create_endpoint_template("RegionOne", "keystone", public_url, | 364 | create_keystone_endpoint(service_host=config["vip"], |
787 | 365 | service_port=int(config["service-port"]) + 1, | ||
788 | 366 | auth_host=config["vip"], | ||
789 | 367 | auth_port=int(config["admin-port"]) + 1, | ||
790 | 368 | region=region) | ||
791 | 369 | else: | ||
792 | 370 | juju_log("Creating standard endpoint") | ||
793 | 371 | for region in config['region'].split(): | ||
794 | 372 | create_keystone_endpoint(service_host=config["hostname"], | ||
795 | 373 | service_port=config["service-port"], | ||
796 | 374 | auth_host=config["hostname"], | ||
797 | 375 | auth_port=config["admin-port"], | ||
798 | 376 | region=region) | ||
799 | 377 | |||
800 | 378 | |||
801 | 379 | def create_keystone_endpoint(service_host, service_port, | ||
802 | 380 | auth_host, auth_port, region): | ||
803 | 381 | public_url = "http://%s:%s/v2.0" % (service_host, service_port) | ||
804 | 382 | admin_url = "http://%s:%s/v2.0" % (auth_host, auth_port) | ||
805 | 383 | internal_url = "http://%s:%s/v2.0" % (service_host, service_port) | ||
806 | 384 | create_endpoint_template(region, "keystone", public_url, | ||
807 | 418 | admin_url, internal_url) | 385 | admin_url, internal_url) |
808 | 419 | 386 | ||
809 | 387 | |||
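Per the hunk above, a clustered unit registers endpoints against the VIP with each configured port plus one, while a standalone unit uses its own hostname and the ports as configured. A sketch of that URL assembly (function name is mine; the +1 offset presumably lines up with the haproxy frontend, given the template's `port - 1` backends — an inference from the diff, not stated in it):

```python
def keystone_endpoint_urls(host, service_port, admin_port, clustered=False):
    """Build (public, admin, internal) keystone endpoint URLs; under HA
    the hook registers the VIP with each configured port + 1."""
    if clustered:
        service_port = int(service_port) + 1
        admin_port = int(admin_port) + 1
    public = "http://%s:%s/v2.0" % (host, service_port)
    admin = "http://%s:%s/v2.0" % (host, admin_port)
    internal = public  # diff builds internal from the same host/port pair
    return public, admin, internal

urls = keystone_endpoint_urls('192.168.1.100', 5000, 35357, clustered=True)
```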
810 | 420 | def update_user_password(username, password): | 388 | def update_user_password(username, password): |
811 | 421 | import manager | 389 | import manager |
812 | 422 | manager = manager.KeystoneManager(endpoint='http://localhost:35357/v2.0/', | 390 | manager = manager.KeystoneManager(endpoint='http://localhost:35357/v2.0/', |
813 | @@ -430,6 +398,40 @@ | |||
814 | 430 | manager.api.users.update_password(user=user_id, password=password) | 398 | manager.api.users.update_password(user=user_id, password=password) |
815 | 431 | juju_log("Successfully updated password for user '%s'" % username) | 399 | juju_log("Successfully updated password for user '%s'" % username) |
816 | 432 | 400 | ||
817 | 401 | def load_stored_passwords(path=SERVICE_PASSWD_PATH): | ||
818 | 402 | creds = {} | ||
819 | 403 | if not os.path.isfile(path): | ||
820 | 404 | return creds | ||
821 | 405 | |||
822 | 406 | stored_passwd = open(path, 'r') | ||
823 | 407 | for l in stored_passwd.readlines(): | ||
824 | 408 | user, passwd = l.strip().split(':') | ||
825 | 409 | creds[user] = passwd | ||
826 | 410 | return creds | ||
827 | 411 | |||
828 | 412 | def save_stored_passwords(path=SERVICE_PASSWD_PATH, **creds): | ||
829 | 413 | with open(path, 'wb') as stored_passwd: | ||
830 | 414 | [stored_passwd.write('%s:%s\n' % (u, p)) for u, p in creds.iteritems()] | ||
831 | 415 | |||
832 | 416 | def get_service_password(service_username): | ||
833 | 417 | creds = load_stored_passwords() | ||
834 | 418 | if service_username in creds: | ||
835 | 419 | return creds[service_username] | ||
836 | 420 | |||
837 | 421 | passwd = subprocess.check_output(['pwgen', '-c', '32', '1']).strip() | ||
838 | 422 | creds[service_username] = passwd | ||
839 | 423 | save_stored_passwords(**creds) | ||
840 | 424 | |||
841 | 425 | return passwd | ||
842 | 426 | |||
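The stored-password helpers above keep service credentials as `user:password` lines in `/var/lib/keystone/services.passwd`; `get_service_password()` reuses an existing entry or generates one with `pwgen`. A sketch of the serialization round trip, operating on strings instead of the file (the `nova` password below is a made-up stand-in for `pwgen` output):

```python
def parse_stored_passwords(text):
    """Parse the user:password lines load_stored_passwords() reads."""
    creds = {}
    for line in text.splitlines():
        if line.strip():
            user, passwd = line.strip().split(':')
            creds[user] = passwd
    return creds

def serialize_stored_passwords(creds):
    """Inverse of the parser, mirroring save_stored_passwords()."""
    return ''.join('%s:%s\n' % (u, p) for u, p in creds.items())

creds = parse_stored_passwords('glance:s3cret\n')
creds['nova'] = 'hypothetical-pw'  # stands in for pwgen -c 32 1 output
round_trip = parse_stored_passwords(serialize_stored_passwords(creds))
```

The colon-delimited format implies service passwords must not contain `:`; pwgen's alphanumeric output satisfies that.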
843 | 427 | def configure_pki_tokens(config): | ||
844 | 428 | '''Configure PKI token signing, if enabled.''' | ||
845 | 429 | if config['enable-pki'] not in ['True', 'true']: | ||
846 | 430 | update_config_block('signing', token_format='UUID') | ||
847 | 431 | else: | ||
848 | 432 | juju_log('TODO: PKI Support, setting to UUID for now.') | ||
849 | 433 | update_config_block('signing', token_format='UUID') | ||
850 | 434 | |||
851 | 433 | 435 | ||
852 | 434 | def do_openstack_upgrade(install_src, packages): | 436 | def do_openstack_upgrade(install_src, packages): |
853 | 435 | '''Upgrade packages from a given install src.''' | 437 | '''Upgrade packages from a given install src.''' |
854 | @@ -474,9 +476,84 @@ | |||
855 | 474 | relation_data["private-address"], | 476 | relation_data["private-address"], |
856 | 475 | config["database"])) | 477 | config["database"])) |
857 | 476 | 478 | ||
858 | 477 | juju_log('Running database migrations for %s' % new_vers) | ||
859 | 478 | execute('service keystone stop', echo=True) | 479 | execute('service keystone stop', echo=True) |
861 | 479 | execute('keystone-manage db_sync', echo=True, die=True) | 480 | if ((is_clustered() and is_leader()) or |
862 | 481 | not is_clustered()): | ||
863 | 482 | juju_log('Running database migrations for %s' % new_vers) | ||
864 | 483 | execute('keystone-manage db_sync', echo=True, die=True) | ||
865 | 484 | else: | ||
866 | 485 | juju_log('Not cluster leader; snoozing whilst leader upgrades DB') | ||
867 | 486 | time.sleep(10) | ||
868 | 480 | execute('service keystone start', echo=True) | 487 | execute('service keystone start', echo=True) |
869 | 481 | time.sleep(5) | 488 | time.sleep(5) |
870 | 482 | juju_log('Completed Keystone upgrade: %s -> %s' % (old_vers, new_vers)) | 489 | juju_log('Completed Keystone upgrade: %s -> %s' % (old_vers, new_vers)) |
871 | 490 | |||
872 | 491 | |||
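The upgrade hunk above gates `keystone-manage db_sync` so that only one unit migrates the database: the CRM leader when clustered, otherwise every (standalone) unit. The decision reduces to:

```python
def should_run_db_sync(clustered, leader):
    """Only the cluster leader runs db_sync during an upgrade; a
    standalone unit always does (sketch of the gating in the diff)."""
    return (clustered and leader) or not clustered
```

Non-leaders sleep briefly instead, on the assumption that the leader finishes the migration before they restart keystone.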
873 | 492 | def is_clustered(): | ||
874 | 493 | for r_id in (relation_ids('ha') or []): | ||
875 | 494 | for unit in (relation_list(r_id) or []): | ||
876 | 495 | relation_data = \ | ||
877 | 496 | relation_get_dict(relation_id=r_id, | ||
878 | 497 | remote_unit=unit) | ||
879 | 498 | if 'clustered' in relation_data: | ||
880 | 499 | return True | ||
881 | 500 | return False | ||
882 | 501 | |||
883 | 502 | |||
884 | 503 | def is_leader(): | ||
885 | 504 | status = execute('crm resource show res_ks_vip', echo=True)[0].strip() | ||
886 | 505 | hostname = execute('hostname', echo=True)[0].strip() | ||
887 | 506 | if hostname in status: | ||
888 | 507 | return True | ||
889 | 508 | else: | ||
890 | 509 | return False | ||
891 | 510 | |||
892 | 511 | |||
893 | 512 | def peer_units(): | ||
894 | 513 | peers = [] | ||
895 | 514 | for r_id in (relation_ids('cluster') or []): | ||
896 | 515 | for unit in (relation_list(r_id) or []): | ||
897 | 516 | peers.append(unit) | ||
898 | 517 | return peers | ||
899 | 518 | |||
900 | 519 | def oldest_peer(peers): | ||
901 | 520 | local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1] | ||
902 | 521 | for peer in peers: | ||
903 | 522 | remote_unit_no = peer.split('/')[1] | ||
904 | 523 | if remote_unit_no < local_unit_no: | ||
905 | 524 | return False | ||
906 | 525 | return True | ||
907 | 526 | |||
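`oldest_peer()` above elects the unit with the lowest unit number. Note that the diff compares `peer.split('/')[1]` values as strings, so lexicographically `'10' < '9'` and the election misbehaves once unit numbers pass 9. A sketch with the comparison done on integers (the `local_unit` parameter replaces the hook's `JUJU_UNIT_NAME` lookup so this runs standalone):

```python
def oldest_peer(peers, local_unit):
    """True when no peer has a lower unit number than the local unit.
    Casts unit numbers to int; the diff compares them as strings, which
    breaks for double-digit unit numbers."""
    local_no = int(local_unit.split('/')[1])
    for peer in peers:
        if int(peer.split('/')[1]) < local_no:
            return False
    return True
```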
908 | 527 | |||
909 | 528 | def eligible_leader(): | ||
910 | 529 | if is_clustered(): | ||
911 | 530 | if not is_leader(): | ||
912 | 531 | juju_log('Deferring action to CRM leader.') | ||
913 | 532 | return False | ||
914 | 533 | else: | ||
915 | 534 | peers = peer_units() | ||
916 | 535 | if peers and not oldest_peer(peers): | ||
917 | 536 | juju_log('Deferring action to oldest service unit.') | ||
918 | 537 | return False | ||
919 | 538 | return True | ||
920 | 539 | |||
921 | 540 | |||
922 | 541 | def synchronize_service_credentials(): | ||
923 | 542 | ''' | ||
924 | 543 | Broadcast service credentials to peers or consume those that have been | ||
925 | 544 | broadcasted by peer, depending on hook context. | ||
926 | 545 | ''' | ||
927 | 546 | if os.path.basename(sys.argv[0]) == 'cluster-relation-changed': | ||
928 | 547 | r_data = relation_get_dict() | ||
929 | 548 | if 'service_credentials' in r_data: | ||
930 | 549 | juju_log('Saving service passwords from peer.') | ||
931 | 550 | save_stored_passwords(**json.loads(r_data['service_credentials'])) | ||
932 | 551 | return | ||
933 | 552 | |||
934 | 553 | creds = load_stored_passwords() | ||
935 | 554 | if not creds: | ||
936 | 555 | return | ||
937 | 556 | juju_log('Synchronizing service passwords to all peers.') | ||
938 | 557 | creds = json.dumps(creds) | ||
939 | 558 | for r_id in (relation_ids('cluster') or []): | ||
940 | 559 | relation_set_2(rid=r_id, service_credentials=creds) | ||
941 | 483 | 560 | ||
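`synchronize_service_credentials()` above ships the whole password map to peers as a single JSON-encoded relation value, and consumes it back in `cluster-relation-changed`. The round trip in isolation (passwords below are made up):

```python
import json

creds = {'glance': 's3cret', 'cinder': 'hypothetical-pw'}

# Leader side: serialize everything into one 'service_credentials'
# relation value, as the hook does before calling relation_set_2().
payload = json.dumps(creds)

# Peer side (cluster-relation-changed): decode and hand the dict to
# save_stored_passwords(**...).
received = json.loads(payload)
```

Packing the map into one value keeps the relation schema stable as services are added, at the cost of re-broadcasting everything on each change.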
942 | === modified file 'metadata.yaml' | |||
943 | --- metadata.yaml 2012-06-07 17:43:42 +0000 | |||
944 | +++ metadata.yaml 2013-02-22 19:26:19 +0000 | |||
945 | @@ -11,3 +11,9 @@ | |||
946 | 11 | requires: | 11 | requires: |
947 | 12 | shared-db: | 12 | shared-db: |
948 | 13 | interface: mysql-shared | 13 | interface: mysql-shared |
949 | 14 | ha: | ||
950 | 15 | interface: hacluster | ||
951 | 16 | scope: container | ||
952 | 17 | peers: | ||
953 | 18 | cluster: | ||
954 | 19 | interface: keystone-ha | ||
955 | 14 | 20 | ||
956 | === added file 'remove_from_cluster' | |||
957 | --- remove_from_cluster 1970-01-01 00:00:00 +0000 | |||
958 | +++ remove_from_cluster 2013-02-22 19:26:19 +0000 | |||
959 | @@ -0,0 +1,2 @@ | |||
960 | 1 | #!/bin/bash | ||
961 | 2 | crm node standby | ||
962 | 0 | 3 | ||
963 | === modified file 'revision' | |||
964 | --- revision 2012-12-12 03:52:01 +0000 | |||
965 | +++ revision 2013-02-22 19:26:19 +0000 | |||
966 | @@ -1,1 +1,1 @@ | |||
968 | 1 | 165 | 1 | 197 |
969 | 2 | 2 | ||
970 | === added directory 'templates' | |||
971 | === added file 'templates/haproxy.cfg' | |||
972 | --- templates/haproxy.cfg 1970-01-01 00:00:00 +0000 | |||
973 | +++ templates/haproxy.cfg 2013-02-22 19:26:19 +0000 | |||
974 | @@ -0,0 +1,35 @@ | |||
975 | 1 | global | ||
976 | 2 | log 127.0.0.1 local0 | ||
977 | 3 | log 127.0.0.1 local1 notice | ||
978 | 4 | maxconn 4096 | ||
979 | 5 | user haproxy | ||
980 | 6 | group haproxy | ||
981 | 7 | spread-checks 0 | ||
982 | 8 | |||
983 | 9 | defaults | ||
984 | 10 | log global | ||
985 | 11 | mode http | ||
986 | 12 | option httplog | ||
987 | 13 | option dontlognull | ||
988 | 14 | retries 3 | ||
989 | 15 | timeout queue 1000 | ||
990 | 16 | timeout connect 1000 | ||
991 | 17 | timeout client 10000 | ||
992 | 18 | timeout server 10000 | ||
993 | 19 | |||
994 | 20 | listen stats :8888 | ||
995 | 21 | mode http | ||
996 | 22 | stats enable | ||
997 | 23 | stats hide-version | ||
998 | 24 | stats realm Haproxy\ Statistics | ||
999 | 25 | stats uri / | ||
1000 | 26 | stats auth admin:password | ||
1001 | 27 | |||
1002 | 28 | {% for service, port in service_ports.iteritems() -%} | ||
1003 | 29 | listen {{ service }} 0.0.0.0:{{ port }} | ||
1004 | 30 | balance roundrobin | ||
1005 | 31 | option tcplog | ||
1006 | 32 | {% for unit, address in units.iteritems() -%} | ||
1007 | 33 | server {{ unit }} {{ address }}:{{ port - 1 }} check | ||
1008 | 34 | {% endfor %} | ||
1009 | 35 | {% endfor %} |
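The template above gives every service a frontend on `{{ port }}` with each unit's backend on `{{ port - 1 }}`, i.e. the real daemons are expected to listen one port below the advertised one. The server lines the inner loop emits can be sketched as:

```python
def backend_servers(units, port):
    """List the 'server' lines templates/haproxy.cfg emits for one
    frontend port: every unit serves on port - 1 (sketch)."""
    return ['server %s %s:%d check' % (unit, addr, port - 1)
            for unit, addr in units.items()]

lines = backend_servers({'keystone-0': '10.0.0.10',
                         'keystone-1': '10.0.0.11'}, 5001)
```

Also note the template's hard-coded `stats auth admin:password`; presumably that credential should become configurable before this lands.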
Hey Chad-
You targeted this merge into the upstream charm @ lp:charms/keystone, can you retarget toward our WIP HA branch @ lp:~openstack-charmers/charms/precise/keystone/ha-support ?
The general approach here LGTM. But I'd love for things to be made a bit more generic so we can just drop this work into other charms unmodified:
- Perhaps save_script_rc() can be moved to lib/openstack_common.py without the hard-coded KEYSTONE_SERVICE_NAME, and the caller can add that to the kwargs instead?
- I imagine there are some tests that would be common across all charms (service_running, service_ports_live). It would be great if those could be synced to other charms unmodified as well. E.g., perhaps have the ports check look for *_OPENSTACK_PORT in the environment and ensure each one? The services check can determine what it should check based on *_OPENSTACK_SERVICE_NAME? Not sure if these common tests should live in some directory other than the tests specific to each service: /var/lib/juju/units/$unit/healthchecks/common.d/ & /var/lib/juju/units/$unit/healthchecks/$unit.d/ ??
End goal for me is to have the common framework for the health checking live with the other common code we maintain, and we can just sync it to other charms when we get it right in one. Of course this is complicated by the combination of bash + python charms we currently maintain, but still doable I think.