Merge lp:~chad.smith/charms/precise/keystone/ha-support into lp:~charmers/charms/precise/keystone/trunk

Proposed by Chad Smith
Status: Superseded
Proposed branch: lp:~chad.smith/charms/precise/keystone/ha-support
Merge into: lp:~charmers/charms/precise/keystone/trunk
Diff against target: 1009 lines (+554/-155)
11 files modified
add_to_cluster (+2/-0)
config.yaml (+37/-0)
health_checks.d/service_ports_live (+13/-0)
health_checks.d/service_running (+13/-0)
hooks/keystone-hooks (+184/-38)
hooks/lib/openstack_common.py (+72/-4)
hooks/utils.py (+189/-112)
metadata.yaml (+6/-0)
remove_from_cluster (+2/-0)
revision (+1/-1)
templates/haproxy.cfg (+35/-0)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/keystone/ha-support
Reviewer: Adam Gandelman (review requested, status: Pending)
Review via email: mp+149883@code.launchpad.net

This proposal has been superseded by a proposal from 2013-02-22.

Description of the change

This is a merge proposal to establish a potential template for delivering health_checks.d, add_to_cluster and remove_from_cluster within the HA-supported OpenStack charms.

This is intended as a talking point and guidance for what landscape-client will expect of OpenStack charms supporting HA services. If there is anything that doesn't look acceptable, I'm game for a change, but I wanted to pull a template together to seed discussion. Any suggestions about better charm-related ways of doing this are welcome.

These script locations will be hard-coded within landscape-client, and the scripts will be run during rolling OpenStack upgrades. So, prior to ODS, we'd like to make sure this process makes sense for most hacluster-managed services.

An example of landscape-client's interaction with these charm scripts is our HAServiceManager plugin at https://code.launchpad.net/~chad.smith/landscape-client/ha-manager-skeleton/+merge/148593

1. Before any packages are upgraded, landscape-client will only call /var/lib/juju/units/<unit_name>/charm/remove_from_cluster.
    - remove_from_cluster will place the node in crm standby state, thereby migrating any active HA resources off of the current node.
2. landscape-client will upgrade the packages; any additional package dependencies will be installed as well.
3. Upon successful upgrade (possibly involving a reboot), landscape-client will run all of the run-parts health scripts from /var/lib/juju/units/<unit_name>/charm/health_checks.d. These health scripts must exit 0 on success or non-zero to indicate a failure.
4. If all health scripts succeed, landscape-client will run /var/lib/juju/units/<unit_name>/charm/add_to_cluster to bring the node online in the HA cluster again. A rough sketch of the full sequence follows this list.
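
To make the flow concrete, here is a minimal, illustrative sketch of that per-unit sequence. It assumes the unit name is available as $UNIT_NAME, that the package upgrade itself is performed by landscape-client between steps 1 and 3 (not by this script), and that the health scripts are executed run-parts style as described above; it is not the actual plugin code:

    #!/bin/bash
    # Illustrative sketch only -- not the landscape-client plugin implementation.
    CHARM_DIR="/var/lib/juju/units/$UNIT_NAME/charm"

    # 1. Put the node into crm standby, migrating active HA resources away.
    "$CHARM_DIR/remove_from_cluster"

    # 2. (landscape-client upgrades packages here, possibly rebooting.)

    # 3. Run every health check; stop at the first non-zero exit code.
    run-parts --exit-on-error "$CHARM_DIR/health_checks.d"

    # 4. All checks passed: bring the node back online in the HA cluster.
    "$CHARM_DIR/add_to_cluster"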

  As James mentioned in email, the remove_from_cluster script might need to take more action to cleanly remove an API service from OpenStack schedulers or configs. If a charm's remove_from_cluster does more than just migrate HA resources away from the node, then we'll need to discuss how to decompose the add_to_cluster functionality into separate scripts: one script that ensures the local OpenStack API is running and can be "health checked", and a separate add_to_cluster script which only finalizes and activates configuration in the HA service.

Revision history for this message
Adam Gandelman (gandelman-a) wrote :

Hey Chad-

You targeted this merge at the upstream charm @ lp:charms/keystone; can you retarget it toward our WIP HA branch @ lp:~openstack-charmers/charms/precise/keystone/ha-support?

The general approach here LGTM. But I'd love for things to be made a bit more generic so we can just drop this work into other charms unmodified:

- Perhaps save_script_rc() can be moved to lib/openstack_common.py without the hard-coded KEYSTONE_SERVICE_NAME, and the caller can add that to the kwargs instead?

- I imagine there are some tests that would be common across all charms (service_running, service_ports_live). It would be great if those could be synced to other charms unmodified as well. E.g., perhaps have the ports check look for *_OPENSTACK_PORT in the environment and ensure each one? The services check can determine what it should check based on *_OPENSTACK_SERVICE_NAME? I'm not sure whether these common tests should live in a directory separate from the tests specific to each service: /var/lib/juju/units/$unit/healthchecks/common.d/ & /var/lib/juju/units/$unit/healthchecks/$unit.d/ ??

The end goal for me is to have the common health-checking framework live with the other common code we maintain, so we can just sync it to other charms once we get it right in one. Of course this is complicated by the mix of bash and Python charms we currently maintain, but it still seems doable.

Revision history for this message
Chad Smith (chad.smith) wrote :

Good points, Adam. I'll make save_script_rc more generic, and the health scripts will be a bit smarter about looking for *OPENSTACK* vars where necessary.

I'm pulled onto something else at the moment, but I'll repost this review against the right target branch and get an updated change to you either today or tomorrow morning.

56. By Chad Smith

move now-generic save_script_rc to lib/openstack_common.py. Update health scripts to be more flexible based on OPENSTACK_PORT* and OPENSTACK_SERVICE* environment variables

57. By Chad Smith

move add_cluster, remove_from_cluster and health_checks.d to a new scripts subdir

58. By Chad Smith

more strict netstat port matching

Unmerged revisions

Preview Diff

=== added file 'add_to_cluster'
--- add_to_cluster 1970-01-01 00:00:00 +0000
+++ add_to_cluster 2013-02-22 19:26:19 +0000
@@ -0,0 +1,2 @@
+#!/bin/bash
+crm node online
=== modified file 'config.yaml'
--- config.yaml 2012-10-12 17:26:48 +0000
+++ config.yaml 2013-02-22 19:26:19 +0000
@@ -26,6 +26,10 @@
26 default: "/etc/keystone/keystone.conf"26 default: "/etc/keystone/keystone.conf"
27 type: string27 type: string
28 description: "Location of keystone configuration file"28 description: "Location of keystone configuration file"
29 log-level:
30 default: WARNING
31 type: string
32 description: Log level (WARNING, INFO, DEBUG, ERROR)
29 service-port:33 service-port:
30 default: 500034 default: 5000
31 type: int35 type: int
@@ -75,3 +79,36 @@
75 default: "keystone"79 default: "keystone"
76 type: string80 type: string
77 description: "Database username"81 description: "Database username"
82 region:
83 default: RegionOne
84 type: string
85 description: "OpenStack Region(s) - separate multiple regions with single space"
86 # HA configuration settings
87 vip:
88 type: string
89 description: "Virtual IP to use to front keystone in ha configuration"
90 vip_iface:
91 type: string
92 default: eth0
93 description: "Network Interface where to place the Virtual IP"
94 vip_cidr:
95 type: int
96 default: 24
97 description: "Netmask that will be used for the Virtual IP"
98 ha-bindiface:
99 type: string
100 default: eth0
101 description: |
102 Default network interface on which HA cluster will bind to communication
103 with the other members of the HA Cluster.
104 ha-mcastport:
105 type: int
106 default: 5403
107 description: |
108 Default multicast port number that will be used to communicate between
109 HA Cluster nodes.
110 # PKI enablement and configuration (Grizzly and beyond)
111 enable-pki:
112 default: "false"
113 type: string
114 description: "Enable PKI token signing (Grizzly and beyond)"
78115
=== added directory 'health_checks.d'
=== added file 'health_checks.d/service_ports_live'
--- health_checks.d/service_ports_live 1970-01-01 00:00:00 +0000
+++ health_checks.d/service_ports_live 2013-02-22 19:26:19 +0000
@@ -0,0 +1,13 @@
+#!/bin/bash
+# Validate that service ports are active
+HEALTH_DIR=`dirname $0`
+UNIT_DIR=`dirname $HEALTH_DIR`
+. $UNIT_DIR/scriptrc
+set -e
+
+# Grab any OPENSTACK_PORT* environment variables
+openstack_ports=`env| awk -F '=' '(/OPENSTACK_PORT/){print $2}'`
+for port in $openstack_ports
+do
+    netstat -ln | grep -q $port
+done
=== added file 'health_checks.d/service_running'
--- health_checks.d/service_running 1970-01-01 00:00:00 +0000
+++ health_checks.d/service_running 2013-02-22 19:26:19 +0000
@@ -0,0 +1,13 @@
+#!/bin/bash
+# Validate that service is running
+HEALTH_DIR=`dirname $0`
+UNIT_DIR=`dirname $HEALTH_DIR`
+. $UNIT_DIR/scriptrc
+set -e
+
+# Grab any OPENSTACK_SERVICE* environment variables
+openstack_service_names=`env| awk -F '=' '(/OPENSTACK_SERVICE/){print $2}'`
+for service_name in $openstack_service_names
+do
+    service $service_name status | grep -q running
+done
=== added symlink 'hooks/cluster-relation-changed'
=== target is u'keystone-hooks'
=== added symlink 'hooks/cluster-relation-departed'
=== target is u'keystone-hooks'
=== added symlink 'hooks/ha-relation-changed'
=== target is u'keystone-hooks'
=== added symlink 'hooks/ha-relation-joined'
=== target is u'keystone-hooks'
=== modified file 'hooks/keystone-hooks'
--- hooks/keystone-hooks 2012-12-12 03:52:01 +0000
+++ hooks/keystone-hooks 2013-02-22 19:26:19 +0000
@@ -8,7 +8,7 @@
 
 config = config_get()
 
-packages = "keystone python-mysqldb pwgen"
+packages = "keystone python-mysqldb pwgen haproxy python-jinja2"
 service = "keystone"
 
 # used to verify joined services are valid openstack components.
@@ -46,6 +46,14 @@
46 "quantum": {46 "quantum": {
47 "type": "network",47 "type": "network",
48 "desc": "Quantum Networking Service"48 "desc": "Quantum Networking Service"
49 },
50 "oxygen": {
51 "type": "oxygen",
52 "desc": "Oxygen Cloud Image Service"
53 },
54 "ceilometer": {
55 "type": "metering",
56 "desc": "Ceilometer Metering Service"
49 }57 }
50}58}
5159
@@ -69,12 +77,14 @@
                         driver='keystone.token.backends.sql.Token')
     update_config_block('ec2',
                         driver='keystone.contrib.ec2.backends.sql.Ec2')
+
     execute("service keystone stop", echo=True)
     execute("keystone-manage db_sync")
     execute("service keystone start", echo=True)
     time.sleep(5)
     ensure_initial_admin(config)
 
+
 def db_joined():
     relation_data = { "database": config["database"],
                       "username": config["database-user"],
@@ -84,15 +94,22 @@
 def db_changed():
     relation_data = relation_get_dict()
     if ('password' not in relation_data or
-        'private-address' not in relation_data):
-        juju_log("private-address or password not set. Peer not ready, exit 0")
+        'db_host' not in relation_data):
+        juju_log("db_host or password not set. Peer not ready, exit 0")
         exit(0)
     update_config_block('sql', connection="mysql://%s:%s@%s/%s" %
                             (config["database-user"],
                              relation_data["password"],
-                             relation_data["private-address"],
+                             relation_data["db_host"],
                              config["database"]))
+
     execute("service keystone stop", echo=True)
+
+    if not eligible_leader():
+        juju_log('Deferring DB initialization to service leader.')
+        execute("service keystone start")
+        return
+
     execute("keystone-manage db_sync", echo=True)
     execute("service keystone start")
     time.sleep(5)
@@ -124,18 +141,30 @@
         realtion_set({ "admin_token": -1 })
         return
 
-    def add_endpoint(region, service, public_url, admin_url, internal_url):
+    def add_endpoint(region, service, publicurl, adminurl, internalurl):
         desc = valid_services[service]["desc"]
         service_type = valid_services[service]["type"]
         create_service_entry(service, service_type, desc)
         create_endpoint_template(region=region, service=service,
-                                 public_url=public_url,
-                                 admin_url=admin_url,
-                                 internal_url=internal_url)
+                                 publicurl=publicurl,
+                                 adminurl=adminurl,
+                                 internalurl=internalurl)
+
+    if not eligible_leader():
+        juju_log('Deferring identity_changed() to service leader.')
+        return
 
     settings = relation_get_dict(relation_id=relation_id,
                                  remote_unit=remote_unit)
 
+    # Allow the remote service to request creation of any additional roles.
+    # Currently used by Swift.
+    if 'requested_roles' in settings and settings['requested_roles'] != 'None':
+        roles = settings['requested_roles'].split(',')
+        juju_log("Creating requested roles: %s" % roles)
+        for role in roles:
+            create_role(role, user=config['admin-user'], tenant='admin')
+
     # the minimum settings needed per endpoint
     single = set(['service', 'region', 'public_url', 'admin_url',
                   'internal_url'])
@@ -145,13 +174,27 @@
         if 'None' in [v for k,v in settings.iteritems()]:
             # Some backend services advertise no endpoint but require a
             # hook execution to update auth strategy.
+            relation_data = {}
+            # Check if clustered and use vip + haproxy ports if so
+            if is_clustered():
+                relation_data["auth_host"] = config['vip']
+                relation_data["auth_port"] = SERVICE_PORTS['keystone_admin']
+                relation_data["service_host"] = config['vip']
+                relation_data["service_port"] = SERVICE_PORTS['keystone_service']
+            else:
+                relation_data["auth_host"] = config['hostname']
+                relation_data["auth_port"] = config['auth-port']
+                relation_data["service_host"] = config['hostname']
+                relation_data["service_port"] = config['service-port']
+            relation_set(relation_data)
             return
 
+
         ensure_valid_service(settings['service'])
         add_endpoint(region=settings['region'], service=settings['service'],
-                     public_url=settings['public_url'],
-                     admin_url=settings['admin_url'],
-                     internal_url=settings['internal_url'])
+                     publicurl=settings['public_url'],
+                     adminurl=settings['admin_url'],
+                     internalurl=settings['internal_url'])
         service_username = settings['service']
     else:
         # assemble multiple endpoints from relation data. service name
@@ -186,9 +229,9 @@
             ep = endpoints[ep]
             ensure_valid_service(ep['service'])
             add_endpoint(region=ep['region'], service=ep['service'],
-                         public_url=ep['public_url'],
-                         admin_url=ep['admin_url'],
-                         internal_url=ep['internal_url'])
+                         publicurl=ep['public_url'],
+                         adminurl=ep['admin_url'],
+                         internalurl=ep['internal_url'])
             services.append(ep['service'])
         service_username = '_'.join(services)
 
@@ -201,26 +244,10 @@
     token = get_admin_token()
     juju_log("Creating service credentials for '%s'" % service_username)
 
-    stored_passwd = '/var/lib/keystone/%s.passwd' % service_username
-    if os.path.isfile(stored_passwd):
-        juju_log("Loading stored service passwd from %s" % stored_passwd)
-        service_password = open(stored_passwd, 'r').readline().strip('\n')
-    else:
-        juju_log("Generating a new service password for %s" % service_username)
-        service_password = execute('pwgen -c 32 1', die=True)[0].strip()
-        open(stored_passwd, 'w+').writelines("%s\n" % service_password)
-
+    service_password = get_service_password(service_username)
     create_user(service_username, service_password, config['service-tenant'])
     grant_role(service_username, config['admin-role'], config['service-tenant'])
 
-    # Allow the remote service to request creation of any additional roles.
-    # Currently used by Swift.
-    if 'requested_roles' in settings:
-        roles = settings['requested_roles'].split(',')
-        juju_log("Creating requested roles: %s" % roles)
-        for role in roles:
-            create_role(role, user=config['admin-user'], tenant='admin')
-
     # As of https://review.openstack.org/#change,4675, all nodes hosting
     # an endpoint(s) needs a service username and password assigned to
     # the service tenant and granted admin role.
@@ -237,7 +264,15 @@
237 "service_password": service_password,264 "service_password": service_password,
238 "service_tenant": config['service-tenant']265 "service_tenant": config['service-tenant']
239 }266 }
267 # Check if clustered and use vip + haproxy ports if so
268 if is_clustered():
269 relation_data["auth_host"] = config['vip']
270 relation_data["auth_port"] = SERVICE_PORTS['keystone_admin']
271 relation_data["service_host"] = config['vip']
272 relation_data["service_port"] = SERVICE_PORTS['keystone_service']
273
240 relation_set(relation_data)274 relation_set(relation_data)
275 synchronize_service_credentials()
241276
242def config_changed():277def config_changed():
243278
@@ -246,11 +281,117 @@
     available = get_os_codename_install_source(config['openstack-origin'])
     installed = get_os_codename_package('keystone')
 
-    if get_os_version_codename(available) > get_os_version_codename(installed):
+    if (available and
+        get_os_version_codename(available) > get_os_version_codename(installed)):
         do_openstack_upgrade(config['openstack-origin'], packages)
 
+    env_vars = {'OPENSTACK_SERVICE_KEYSTONE': 'keystone',
+                'OPENSTACK_PORT_ADMIN': config['admin-port'],
+                'OPENSTACK_PORT_PUBLIC': config['service-port']}
+    save_script_rc(**env_vars)
+
     set_admin_token(config['admin-token'])
-    ensure_initial_admin(config)
+
+    if eligible_leader():
+        juju_log('Cluster leader - ensuring endpoint configuration is up to date')
+        ensure_initial_admin(config)
+
+    update_config_block('logger_root', level=config['log-level'],
+                        file='/etc/keystone/logging.conf')
+    if get_os_version_package('keystone') >= '2013.1':
+        # PKI introduced in Grizzly
+        configure_pki_tokens(config)
+
+    execute("service keystone restart", echo=True)
+    cluster_changed()
+
+
+def upgrade_charm():
+    cluster_changed()
+    if eligible_leader():
+        juju_log('Cluster leader - ensuring endpoint configuration is up to date')
+        ensure_initial_admin(config)
+
+
+SERVICE_PORTS = {
+    "keystone_admin": int(config['admin-port']) + 1,
+    "keystone_service": int(config['service-port']) + 1
+    }
+
+
+def cluster_changed():
+    cluster_hosts = {}
+    cluster_hosts['self'] = config['hostname']
+    for r_id in relation_ids('cluster'):
+        for unit in relation_list(r_id):
+            cluster_hosts[unit.replace('/','-')] = \
+                relation_get_dict(relation_id=r_id,
+                                  remote_unit=unit)['private-address']
+    configure_haproxy(cluster_hosts,
+                      SERVICE_PORTS)
+
+    synchronize_service_credentials()
+
+
+def ha_relation_changed():
+    relation_data = relation_get_dict()
+    if ('clustered' in relation_data and
+        is_leader()):
+        juju_log('Cluster configured, notifying other services and updating'
+                 'keystone endpoint configuration')
+        # Update keystone endpoint to point at VIP
+        ensure_initial_admin(config)
+        # Tell all related services to start using
+        # the VIP and haproxy ports instead
+        for r_id in relation_ids('identity-service'):
+            relation_set_2(rid=r_id,
+                           auth_host=config['vip'],
+                           service_host=config['vip'],
+                           service_port=SERVICE_PORTS['keystone_service'],
+                           auth_port=SERVICE_PORTS['keystone_admin'])
+
+
+def ha_relation_joined():
+    # Obtain the config values necessary for the cluster config. These
+    # include multicast port and interface to bind to.
+    corosync_bindiface = config['ha-bindiface']
+    corosync_mcastport = config['ha-mcastport']
+
+    # Obtain resources
+    resources = {
+        'res_ks_vip':'ocf:heartbeat:IPaddr2',
+        'res_ks_haproxy':'lsb:haproxy'
+        }
+    # TODO: Obtain netmask and nic where to place VIP.
+    resource_params = {
+        'res_ks_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' % (config['vip'],
+                      config['vip_cidr'], config['vip_iface']),
+        'res_ks_haproxy':'op monitor interval="5s"'
+        }
+    init_services = {
+        'res_ks_haproxy':'haproxy'
+        }
+    groups = {
+        'grp_ks_haproxy':'res_ks_vip res_ks_haproxy'
+        }
+    #clones = {
+    #    'cln_ks_haproxy':'res_ks_haproxy meta globally-unique="false" interleave="true"'
+    #    }
+
+    #orders = {
+    #    'ord_vip_before_haproxy':'inf: res_ks_vip res_ks_haproxy'
+    #    }
+    #colocations = {
+    #    'col_vip_on_haproxy':'inf: res_ks_haproxy res_ks_vip'
+    #    }
+
+    relation_set_2(init_services=init_services,
+                   corosync_bindiface=corosync_bindiface,
+                   corosync_mcastport=corosync_mcastport,
+                   resources=resources,
+                   resource_params=resource_params,
+                   groups=groups)
+
 
 hooks = {
     "install": install_hook,
@@ -258,12 +399,17 @@
258 "shared-db-relation-changed": db_changed,399 "shared-db-relation-changed": db_changed,
259 "identity-service-relation-joined": identity_joined,400 "identity-service-relation-joined": identity_joined,
260 "identity-service-relation-changed": identity_changed,401 "identity-service-relation-changed": identity_changed,
261 "config-changed": config_changed402 "config-changed": config_changed,
403 "cluster-relation-changed": cluster_changed,
404 "cluster-relation-departed": cluster_changed,
405 "ha-relation-joined": ha_relation_joined,
406 "ha-relation-changed": ha_relation_changed,
407 "upgrade-charm": upgrade_charm
262}408}
263409
264# keystone-hooks gets called by symlink corresponding to the requested relation410# keystone-hooks gets called by symlink corresponding to the requested relation
265# hook.411# hook.
266arg0 = sys.argv[0].split("/").pop()412hook = os.path.basename(sys.argv[0])
267if arg0 not in hooks.keys():413if hook not in hooks.keys():
268 error_out("Unsupported hook: %s" % arg0)414 error_out("Unsupported hook: %s" % hook)
269hooks[arg0]()415hooks[hook]()
270416
=== modified file 'hooks/lib/openstack_common.py'
--- hooks/lib/openstack_common.py 2012-12-05 20:35:05 +0000
+++ hooks/lib/openstack_common.py 2013-02-22 19:26:19 +0000
@@ -3,6 +3,7 @@
 # Common python helper functions used for OpenStack charms.
 
 import subprocess
+import os
 
 CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
 CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
@@ -22,6 +23,12 @@
     '2013.1': 'grizzly'
 }
 
+# The ugly duckling
+swift_codenames = {
+    '1.4.3': 'diablo',
+    '1.4.8': 'essex',
+    '1.7.4': 'folsom'
+}
 
 def juju_log(msg):
     subprocess.check_call(['juju-log', msg])
@@ -118,12 +125,32 @@
 
     vers = vers[:6]
     try:
-        return openstack_codenames[vers]
+        if 'swift' in pkg:
+            vers = vers[:5]
+            return swift_codenames[vers]
+        else:
+            vers = vers[:6]
+            return openstack_codenames[vers]
     except KeyError:
         e = 'Could not determine OpenStack codename for version %s' % vers
         error_out(e)
 
 
+def get_os_version_package(pkg):
+    '''Derive OpenStack version number from an installed package.'''
+    codename = get_os_codename_package(pkg)
+
+    if 'swift' in pkg:
+        vers_map = swift_codenames
+    else:
+        vers_map = openstack_codenames
+
+    for version, cname in vers_map.iteritems():
+        if cname == codename:
+            return version
+    e = "Could not determine OpenStack version for package: %s" % pkg
+    error_out(e)
+
 def configure_installation_source(rel):
     '''Configure apt installation source.'''
 
@@ -164,9 +191,11 @@
               'version (%s)' % (ca_rel, ubuntu_rel)
         error_out(e)
 
-    if ca_rel == 'folsom/staging':
+    if 'staging' in ca_rel:
         # staging is just a regular PPA.
-        cmd = 'add-apt-repository -y ppa:ubuntu-cloud-archive/folsom-staging'
+        os_rel = ca_rel.split('/')[0]
+        ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
+        cmd = 'add-apt-repository -y %s' % ppa
         subprocess.check_call(cmd.split(' '))
         return
 
@@ -174,7 +203,10 @@
     pockets = {
         'folsom': 'precise-updates/folsom',
         'folsom/updates': 'precise-updates/folsom',
-        'folsom/proposed': 'precise-proposed/folsom'
+        'folsom/proposed': 'precise-proposed/folsom',
+        'grizzly': 'precise-updates/grizzly',
+        'grizzly/updates': 'precise-updates/grizzly',
+        'grizzly/proposed': 'precise-proposed/grizzly'
     }
 
     try:
@@ -191,3 +223,39 @@
     else:
         error_out("Invalid openstack-release specified: %s" % rel)
 
+HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
+HAPROXY_DEFAULT = '/etc/default/haproxy'
+
+def configure_haproxy(units, service_ports, template_dir=None):
+    template_dir = template_dir or 'templates'
+    import jinja2
+    context = {
+        'units': units,
+        'service_ports': service_ports
+    }
+    templates = jinja2.Environment(
+        loader=jinja2.FileSystemLoader(template_dir)
+    )
+    template = templates.get_template(
+        os.path.basename(HAPROXY_CONF)
+    )
+    with open(HAPROXY_CONF, 'w') as f:
+        f.write(template.render(context))
+    with open(HAPROXY_DEFAULT, 'w') as f:
+        f.write('ENABLED=1')
+
+def save_script_rc(script_path="scriptrc", **env_vars):
+    """
+    Write an rc file in the charm-delivered directory containing
+    exported environment variables provided by env_vars. Any charm scripts run
+    outside the juju hook environment can source this scriptrc to obtain
+    updated config information necessary to perform health checks or
+    service changes.
+    """
+    unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-')
+    juju_rc_path="/var/lib/juju/units/%s/charm/%s" % (unit_name, script_path)
+    with open(juju_rc_path, 'wb') as rc_script:
+        rc_script.write(
+            "#!/bin/bash\n")
+        [rc_script.write('export %s=%s\n' % (u, p))
+            for u, p in env_vars.iteritems() if u != "script_path"]
=== added symlink 'hooks/upgrade-charm'
=== target is u'keystone-hooks'
=== modified file 'hooks/utils.py'
--- hooks/utils.py 2012-12-12 03:52:41 +0000
+++ hooks/utils.py 2013-02-22 19:26:19 +0000
@@ -1,4 +1,5 @@
 #!/usr/bin/python
+import ConfigParser
 import subprocess
 import sys
 import json
@@ -10,9 +11,10 @@
 keystone_conf = "/etc/keystone/keystone.conf"
 stored_passwd = "/var/lib/keystone/keystone.passwd"
 stored_token = "/var/lib/keystone/keystone.token"
+SERVICE_PASSWD_PATH = '/var/lib/keystone/services.passwd'
 
 def execute(cmd, die=False, echo=False):
     """ Executes a command
 
     if die=True, script will exit(1) if command does not return 0
     if echo=True, output of command will be printed to stdout
@@ -79,6 +81,20 @@
     for k in relation_data:
         execute("relation-set %s=%s" % (k, relation_data[k]), die=True)
 
+def relation_set_2(**kwargs):
+    cmd = [
+        'relation-set'
+        ]
+    args = []
+    for k, v in kwargs.items():
+        if k == 'rid':
+            cmd.append('-r')
+            cmd.append(v)
+        else:
+            args.append('{}={}'.format(k, v))
+    cmd += args
+    subprocess.check_call(cmd)
+
 def relation_get(relation_data):
     """ Obtain all current relation data
     relation_data is a list of options to query from the relation
@@ -149,106 +165,26 @@
                   keystone_conf)
     error_out('Could not find admin_token line in %s' % keystone_conf)
 
-def update_config_block(block, **kwargs):
+def update_config_block(section, **kwargs):
     """ Updates keystone.conf blocks given kwargs.
-    Can be used to update driver settings for a particular backend,
-    setting the sql connection, etc.
-
-    Parses block heading as '[block]'
-
-    If block does not exist, a new block will be created at end of file with
-    given kwargs
-    """
-    f = open(keystone_conf, "r+")
-    orig = f.readlines()
-    new = []
-    found_block = ""
-    heading = "[%s]\n" % block
+    Update a config setting in a specific setting of a config
+    file (/etc/keystone/keystone.conf, by default)
+    """
+    if 'file' in kwargs:
+        conf_file = kwargs['file']
+        del kwargs['file']
+    else:
+        conf_file = keystone_conf
+    config = ConfigParser.RawConfigParser()
+    config.read(conf_file)
+
+    if section != 'DEFAULT' and not config.has_section(section):
+        config.add_section(section)
 
-    lines = len(orig)
-    ln = 0
-
-    def update_block(block):
-        for k, v in kwargs.iteritems():
-            for l in block:
-                if l.strip().split(" ")[0] == k:
-                    block[block.index(l)] = "%s = %s\n" % (k, v)
-                    return
-            block.append('%s = %s\n' % (k, v))
-            block.append('\n')
-
-    try:
-        found = False
-        while ln < lines:
-            if orig[ln] != heading:
-                new.append(orig[ln])
-                ln += 1
-            else:
-                new.append(orig[ln])
-                ln += 1
-                block = []
-                while orig[ln].strip() != '':
-                    block.append(orig[ln])
-                    ln += 1
-                update_block(block)
-                new += block
-                found = True
-
-        if not found:
-            if new[(len(new) - 1)].strip() != '':
-                new.append('\n')
-            new.append('%s' % heading)
-            for k, v in kwargs.iteritems():
-                new.append('%s = %s\n' % (k, v))
-            new.append('\n')
-    except:
-        error_out('Error while attempting to update config block. '\
-                  'Refusing to overwite existing config.')
-
-        return
-
-    # backup original config
-    backup = open(keystone_conf + '.juju-back', 'w+')
-    for l in orig:
-        backup.write(l)
-    backup.close()
-
-    # update config
-    f.seek(0)
-    f.truncate()
-    for l in new:
-        f.write(l)
-
-
-def keystone_conf_update(opt, val):
-    """ Updates keystone.conf values
-    If option exists, it is reset to new value
-    If it does not, it added to the top of the config file after the [DEFAULT]
-    heading to keep it out of the paste deploy config
-    """
-    f = open(keystone_conf, "r+")
-    orig = f.readlines()
-    new = ""
-    found = False
-    for l in orig:
-        if l.split(' ')[0] == opt:
-            juju_log("Updating %s, setting %s = %s" % (keystone_conf, opt, val))
-            new += "%s = %s\n" % (opt, val)
-            found = True
-        else:
-            new += l
-    new = new.split('\n')
-    # insert a new value at the top of the file, after the 'DEFAULT' header so
-    # as not to muck up paste deploy configuration later in the file
-    if not found:
-        juju_log("Adding new config option %s = %s" % (opt, val))
-        header = new.index("[DEFAULT]")
-        new.insert((header+1), "%s = %s" % (opt, val))
-    f.seek(0)
-    f.truncate()
-    for l in new:
-        f.write("%s\n" % l)
-    f.close
+    for k, v in kwargs.iteritems():
+        config.set(section, k, v)
+    with open(conf_file, 'wb') as out:
+        config.write(out)
 
 def create_service_entry(service_name, service_type, service_desc, owner=None):
     """ Add a new service entry to keystone if one does not already exist """
@@ -264,8 +200,8 @@
                                  description=service_desc)
     juju_log("Created new service entry '%s'" % service_name)
 
-def create_endpoint_template(region, service, public_url, admin_url,
-                             internal_url):
+def create_endpoint_template(region, service, publicurl, adminurl,
+                             internalurl):
     """ Create a new endpoint template for service if one does not already
         exist matching name *and* region """
     import manager
@@ -276,13 +212,24 @@
         if ep['service_id'] == service_id and ep['region'] == region:
             juju_log("Endpoint template already exists for '%s' in '%s'"
                      % (service, region))
-            return
+
+            up_to_date = True
+            for k in ['publicurl', 'adminurl', 'internalurl']:
+                if ep[k] != locals()[k]:
+                    up_to_date = False
+
+            if up_to_date:
+                return
+            else:
+                # delete endpoint and recreate if endpoint urls need updating.
+                juju_log("Updating endpoint template with new endpoint urls.")
+                manager.api.endpoints.delete(ep['id'])
 
     manager.api.endpoints.create(region=region,
                                  service_id=service_id,
-                                 publicurl=public_url,
-                                 adminurl=admin_url,
-                                 internalurl=internal_url)
+                                 publicurl=publicurl,
+                                 adminurl=adminurl,
+                                 internalurl=internalurl)
     juju_log("Created new endpoint template for '%s' in '%s'" %
              (region, service))
 
@@ -411,12 +358,33 @@
     create_service_entry("keystone", "identity", "Keystone Identity Service")
     # following documentation here, perhaps we should be using juju
     # public/private addresses for public/internal urls.
-    public_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"])
-    admin_url = "http://%s:%s/v2.0" % (config["hostname"], config["admin-port"])
-    internal_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"])
-    create_endpoint_template("RegionOne", "keystone", public_url,
+    if is_clustered():
+        juju_log("Creating endpoint for clustered configuration")
+        for region in config['region'].split():
+            create_keystone_endpoint(service_host=config["vip"],
+                                     service_port=int(config["service-port"]) + 1,
+                                     auth_host=config["vip"],
+                                     auth_port=int(config["admin-port"]) + 1,
+                                     region=region)
+    else:
+        juju_log("Creating standard endpoint")
+        for region in config['region'].split():
+            create_keystone_endpoint(service_host=config["hostname"],
+                                     service_port=config["service-port"],
+                                     auth_host=config["hostname"],
+                                     auth_port=config["admin-port"],
+                                     region=region)
+
+
+def create_keystone_endpoint(service_host, service_port,
+                             auth_host, auth_port, region):
+    public_url = "http://%s:%s/v2.0" % (service_host, service_port)
+    admin_url = "http://%s:%s/v2.0" % (auth_host, auth_port)
+    internal_url = "http://%s:%s/v2.0" % (service_host, service_port)
+    create_endpoint_template(region, "keystone", public_url,
                              admin_url, internal_url)
 
+
 def update_user_password(username, password):
     import manager
     manager = manager.KeystoneManager(endpoint='http://localhost:35357/v2.0/',
@@ -430,6 +398,40 @@
     manager.api.users.update_password(user=user_id, password=password)
     juju_log("Successfully updated password for user '%s'" % username)
 
+def load_stored_passwords(path=SERVICE_PASSWD_PATH):
+    creds = {}
+    if not os.path.isfile(path):
+        return creds
+
+    stored_passwd = open(path, 'r')
+    for l in stored_passwd.readlines():
+        user, passwd = l.strip().split(':')
+        creds[user] = passwd
+    return creds
+
+def save_stored_passwords(path=SERVICE_PASSWD_PATH, **creds):
+    with open(path, 'wb') as stored_passwd:
+        [stored_passwd.write('%s:%s\n' % (u, p)) for u, p in creds.iteritems()]
+
+def get_service_password(service_username):
+    creds = load_stored_passwords()
+    if service_username in creds:
+        return creds[service_username]
+
+    passwd = subprocess.check_output(['pwgen', '-c', '32', '1']).strip()
+    creds[service_username] = passwd
+    save_stored_passwords(**creds)
+
+    return passwd
+
+def configure_pki_tokens(config):
+    '''Configure PKI token signing, if enabled.'''
+    if config['enable-pki'] not in ['True', 'true']:
+        update_config_block('signing', token_format='UUID')
+    else:
+        juju_log('TODO: PKI Support, setting to UUID for now.')
+        update_config_block('signing', token_format='UUID')
+
 
 def do_openstack_upgrade(install_src, packages):
     '''Upgrade packages from a given install src.'''
@@ -474,9 +476,84 @@
474 relation_data["private-address"],476 relation_data["private-address"],
475 config["database"]))477 config["database"]))
476478
477 juju_log('Running database migrations for %s' % new_vers)
478 execute('service keystone stop', echo=True)479 execute('service keystone stop', echo=True)
479 execute('keystone-manage db_sync', echo=True, die=True)480 if ((is_clustered() and is_leader()) or
481 not is_clustered()):
482 juju_log('Running database migrations for %s' % new_vers)
483 execute('keystone-manage db_sync', echo=True, die=True)
484 else:
485 juju_log('Not cluster leader; snoozing whilst leader upgrades DB')
486 time.sleep(10)
480 execute('service keystone start', echo=True)487 execute('service keystone start', echo=True)
481 time.sleep(5)488 time.sleep(5)
482 juju_log('Completed Keystone upgrade: %s -> %s' % (old_vers, new_vers))489 juju_log('Completed Keystone upgrade: %s -> %s' % (old_vers, new_vers))
490
491
492def is_clustered():
493 for r_id in (relation_ids('ha') or []):
494 for unit in (relation_list(r_id) or []):
495 relation_data = \
496 relation_get_dict(relation_id=r_id,
497 remote_unit=unit)
498 if 'clustered' in relation_data:
499 return True
500 return False
501
502
503def is_leader():
504 status = execute('crm resource show res_ks_vip', echo=True)[0].strip()
505 hostname = execute('hostname', echo=True)[0].strip()
506 if hostname in status:
507 return True
508 else:
509 return False
510
511
512def peer_units():
513 peers = []
514 for r_id in (relation_ids('cluster') or []):
515 for unit in (relation_list(r_id) or []):
516 peers.append(unit)
517 return peers
518
519def oldest_peer(peers):
520 local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1]
521 for peer in peers:
522 remote_unit_no = peer.split('/')[1]
523 if remote_unit_no < local_unit_no:
524 return False
525 return True
526
527
528def eligible_leader():
529 if is_clustered():
530 if not is_leader():
531 juju_log('Deferring action to CRM leader.')
532 return False
533 else:
534 peers = peer_units()
535 if peers and not oldest_peer(peers):
536 juju_log('Deferring action to oldest service unit.')
537 return False
538 return True
539
540
541def synchronize_service_credentials():
542 '''
543 Broadcast service credentials to peers or consume those that have been
544 broadcasted by peer, depending on hook context.
545 '''
546 if os.path.basename(sys.argv[0]) == 'cluster-relation-changed':
547 r_data = relation_get_dict()
548 if 'service_credentials' in r_data:
549 juju_log('Saving service passwords from peer.')
550 save_stored_passwords(**json.loads(r_data['service_credentials']))
551 return
552
553 creds = load_stored_passwords()
554 if not creds:
555 return
556 juju_log('Synchronizing service passwords to all peers.')
557 creds = json.dumps(creds)
558 for r_id in (relation_ids('cluster') or []):
559 relation_set_2(rid=r_id, service_credentials=creds)
483560
=== modified file 'metadata.yaml'
--- metadata.yaml 2012-06-07 17:43:42 +0000
+++ metadata.yaml 2013-02-22 19:26:19 +0000
@@ -11,3 +11,9 @@
 requires:
   shared-db:
     interface: mysql-shared
+  ha:
+    interface: hacluster
+    scope: container
+peers:
+  cluster:
+    interface: keystone-ha
=== added file 'remove_from_cluster'
--- remove_from_cluster 1970-01-01 00:00:00 +0000
+++ remove_from_cluster 2013-02-22 19:26:19 +0000
@@ -0,0 +1,2 @@
+#!/bin/bash
+crm node standby
=== modified file 'revision'
--- revision 2012-12-12 03:52:01 +0000
+++ revision 2013-02-22 19:26:19 +0000
@@ -1,1 +1,1 @@
-165
+197
=== added directory 'templates'
=== added file 'templates/haproxy.cfg'
--- templates/haproxy.cfg 1970-01-01 00:00:00 +0000
+++ templates/haproxy.cfg 2013-02-22 19:26:19 +0000
@@ -0,0 +1,35 @@
+global
+    log 127.0.0.1 local0
+    log 127.0.0.1 local1 notice
+    maxconn 4096
+    user haproxy
+    group haproxy
+    spread-checks 0
+
+defaults
+    log global
+    mode http
+    option httplog
+    option dontlognull
+    retries 3
+    timeout queue 1000
+    timeout connect 1000
+    timeout client 10000
+    timeout server 10000
+
+listen stats :8888
+    mode http
+    stats enable
+    stats hide-version
+    stats realm Haproxy\ Statistics
+    stats uri /
+    stats auth admin:password
+
+{% for service, port in service_ports.iteritems() -%}
+listen {{ service }} 0.0.0.0:{{ port }}
+    balance roundrobin
+    option tcplog
+    {% for unit, address in units.iteritems() -%}
+    server {{ unit }} {{ address }}:{{ port - 1 }} check
+    {% endfor %}
+{% endfor %}
