Merge lp:~chad.smith/charms/precise/keystone/ha-support into lp:~charmers/charms/precise/keystone/trunk

Proposed by Chad Smith
Status: Superseded
Proposed branch: lp:~chad.smith/charms/precise/keystone/ha-support
Merge into: lp:~charmers/charms/precise/keystone/trunk
Diff against target: 1009 lines (+554/-155)
11 files modified
add_to_cluster (+2/-0)
config.yaml (+37/-0)
health_checks.d/service_ports_live (+13/-0)
health_checks.d/service_running (+13/-0)
hooks/keystone-hooks (+184/-38)
hooks/lib/openstack_common.py (+72/-4)
hooks/utils.py (+189/-112)
metadata.yaml (+6/-0)
remove_from_cluster (+2/-0)
revision (+1/-1)
templates/haproxy.cfg (+35/-0)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/keystone/ha-support
Reviewer: Adam Gandelman (status: Pending)
Review via email: mp+149883@code.launchpad.net

This proposal has been superseded by a proposal from 2013-02-22.

Description of the change

This is a merge proposal to establish a potential template for delivering health_checks.d, add_to_cluster and remove_from_cluster scripts within the HA-supported OpenStack charms.

This is intended as a talking point and guidance for what landscape-client will expect from OpenStack charms supporting HA services. If there is anything that doesn't look acceptable, I'm game for a change, but I wanted to pull a template together to seed discussion. Any suggestions about better charm-related ways of doing this are welcome.

These script locations will be hard-coded within landscape-client and will be run during rolling OpenStack upgrades. So, prior to ODS, we'd like to make sure this process makes sense for most hacluster-managed services.

An example of landscape-client's interaction with the charm scripts is our HAServiceManager plugin at https://code.launchpad.net/~chad.smith/landscape-client/ha-manager-skeleton/+merge/148593

1. Before any packages are upgraded, landscape-client will call only /var/lib/juju/units/<unit_name>/charm/remove_from_cluster.
    - remove_from_cluster will place the node in crm standby state, thereby migrating any active HA resources off the current node.
2. landscape-client will upgrade the packages, installing any additional package dependencies.
3. Upon successful upgrade (possibly involving a reboot), landscape-client will run, via run-parts, all health scripts from /var/lib/juju/units/<unit_name>/charm/health_checks.d. Each health script must return a success (0) or failure (non-zero) exit code to indicate any issues.
4. If all health scripts succeed, landscape-client will run /var/lib/juju/units/<unit_name>/charm/add_to_cluster to bring the node back online in the HA cluster (the full sequence is sketched just after this list).
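
For illustration, the whole sequence landscape-client would drive on a single unit looks roughly like the following sketch (the unit name and the package-upgrade command are placeholders; only the script locations are the ones described above):

    #!/bin/bash
    # Illustrative sketch only; not shipped by the charm.
    set -e
    UNIT_DIR=/var/lib/juju/units/keystone-0/charm   # placeholder unit name

    # 1. Migrate HA resources away from this node before touching packages.
    $UNIT_DIR/remove_from_cluster

    # 2. Upgrade packages, pulling in any new dependencies.
    apt-get update && apt-get -y dist-upgrade

    # 3. Run every health check; --exit-on-error stops at the first failure.
    run-parts --exit-on-error $UNIT_DIR/health_checks.d

    # 4. Everything passed: bring the node back online in the HA cluster.
    $UNIT_DIR/add_to_cluster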

As James mentioned in email, the remove_from_cluster script might need to take more action to cleanly remove an API service from OpenStack schedulers or configs. If a charm's remove_from_cluster does more than just migrate HA resources away from the node, then we'll need to discuss how to decompose the add_to_cluster functionality into separate scripts: one script that ensures the local OpenStack API is running and can be "health checked", and a separate add_to_cluster script that only finalizes and activates the configuration in the HA service.
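
For the sake of that discussion, one hypothetical split could look like the two scripts below (the first script's name and contents are purely illustrative and are not part of this branch):

    #!/bin/bash
    # start_local_service (hypothetical): make sure the local API is running so
    # it can be health checked, without yet activating it in the HA cluster.
    service keystone status | grep -q running || service keystone start

    #!/bin/bash
    # add_to_cluster (as delivered here): only finalize and activate the node
    # in the HA service.
    crm node online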

Adam Gandelman (gandelman-a) wrote:

Hey Chad-

You targeted this merge at the upstream charm @ lp:charms/keystone; can you retarget it toward our WIP HA branch @ lp:~openstack-charmers/charms/precise/keystone/ha-support?

The general approach here LGTM, but I'd love for things to be made a bit more generic so we can just drop this work into other charms unmodified:

- Perhaps save_script_rc() can be moved to lib/openstack_common.py without the hard-coded KEYSTONE_SERVICE_NAME, and the caller can add that to the kwargs instead?

- I imagine there are some tests that would be common across all charms (service_running, service_ports_live). It would be great if those could be synced to other charms unmodified as well. E.g., perhaps have the ports check look for *_OPENSTACK_PORT in the environment and ensure each one (a sketch follows below)? The services check could determine what it should check based on *_OPENSTACK_SERVICE_NAME? I'm not sure whether these common tests should live in a directory separate from the tests specific to each service: /var/lib/juju/units/$unit/healthchecks/common.d/ & /var/lib/juju/units/$unit/healthchecks/$unit.d/ ?
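
As a rough sketch of that env-var-driven port check (this is essentially what the updated health_checks.d/service_ports_live script in the diff below ends up doing; the scriptrc location is the one written by save_script_rc):

    #!/bin/bash
    # Generic port check: source the charm-provided scriptrc and verify that
    # every exported *OPENSTACK_PORT* value has a live listener.
    set -e
    UNIT_DIR=$(dirname $(dirname $0))
    . $UNIT_DIR/scriptrc
    for port in $(env | awk -F= '/OPENSTACK_PORT/ {print $2}'); do
        netstat -ln | grep -q ":$port "
    done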

The end goal for me is to have the common health-checking framework live with the other common code we maintain, so we can just sync it to other charms once we get it right in one. Of course, this is complicated by the combination of bash and Python charms we currently maintain, but it's still doable, I think.

Chad Smith (chad.smith) wrote:

Good points, Adam. I'll make save_script_rc more generic, and the health scripts will be a bit smarter about looking for *OPENSTACK* vars where necessary.
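
Once save_script_rc is generic, the scriptrc it writes for keystone should end up looking roughly like this (the variable names come from the env_vars passed in config_changed in the diff below; the port values assume keystone's default admin and service ports):

    #!/bin/bash
    export OPENSTACK_SERVICE_KEYSTONE=keystone
    export OPENSTACK_PORT_ADMIN=35357
    export OPENSTACK_PORT_PUBLIC=5000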

I'm pulled onto something else at the moment, but I'll repost this review against the right target branch and get a change to you either today or tomorrow morning.

56. By Chad Smith

move now-generic save_script_rc to lib/openstack_common.py. Update health scripts to be more flexible based on OPENSTACK_PORT* and OPENSTACK_SERVICE* environment variables

57. By Chad Smith

move add_cluster, remove_from_cluster and health_checks.d to a new scripts subdir

58. By Chad Smith

more strict netstat port matching


Preview Diff

1=== added file 'add_to_cluster'
2--- add_to_cluster 1970-01-01 00:00:00 +0000
3+++ add_to_cluster 2013-02-22 19:26:19 +0000
4@@ -0,0 +1,2 @@
5+#!/bin/bash
6+crm node online
7
8=== modified file 'config.yaml'
9--- config.yaml 2012-10-12 17:26:48 +0000
10+++ config.yaml 2013-02-22 19:26:19 +0000
11@@ -26,6 +26,10 @@
12 default: "/etc/keystone/keystone.conf"
13 type: string
14 description: "Location of keystone configuration file"
15+ log-level:
16+ default: WARNING
17+ type: string
18+ description: Log level (WARNING, INFO, DEBUG, ERROR)
19 service-port:
20 default: 5000
21 type: int
22@@ -75,3 +79,36 @@
23 default: "keystone"
24 type: string
25 description: "Database username"
26+ region:
27+ default: RegionOne
28+ type: string
29+ description: "OpenStack Region(s) - separate multiple regions with single space"
30+ # HA configuration settings
31+ vip:
32+ type: string
33+ description: "Virtual IP to use to front keystone in ha configuration"
34+ vip_iface:
35+ type: string
36+ default: eth0
37+ description: "Network Interface where to place the Virtual IP"
38+ vip_cidr:
39+ type: int
40+ default: 24
41+ description: "Netmask that will be used for the Virtual IP"
42+ ha-bindiface:
43+ type: string
44+ default: eth0
45+ description: |
46+ Default network interface on which HA cluster will bind to communication
47+ with the other members of the HA Cluster.
48+ ha-mcastport:
49+ type: int
50+ default: 5403
51+ description: |
52+ Default multicast port number that will be used to communicate between
53+ HA Cluster nodes.
54+ # PKI enablement and configuration (Grizzly and beyond)
55+ enable-pki:
56+ default: "false"
57+ type: string
58+ description: "Enable PKI token signing (Grizzly and beyond)"
59
60=== added directory 'health_checks.d'
61=== added file 'health_checks.d/service_ports_live'
62--- health_checks.d/service_ports_live 1970-01-01 00:00:00 +0000
63+++ health_checks.d/service_ports_live 2013-02-22 19:26:19 +0000
64@@ -0,0 +1,13 @@
65+#!/bin/bash
66+# Validate that service ports are active
67+HEALTH_DIR=`dirname $0`
68+UNIT_DIR=`dirname $HEALTH_DIR`
69+. $UNIT_DIR/scriptrc
70+set -e
71+
72+# Grab any OPENSTACK_PORT* environment variables
73+openstack_ports=`env| awk -F '=' '(/OPENSTACK_PORT/){print $2}'`
74+for port in $openstack_ports
75+do
76+ netstat -ln | grep -q $port
77+done
78
79=== added file 'health_checks.d/service_running'
80--- health_checks.d/service_running 1970-01-01 00:00:00 +0000
81+++ health_checks.d/service_running 2013-02-22 19:26:19 +0000
82@@ -0,0 +1,13 @@
83+#!/bin/bash
84+# Validate that service is running
85+HEALTH_DIR=`dirname $0`
86+UNIT_DIR=`dirname $HEALTH_DIR`
87+. $UNIT_DIR/scriptrc
88+set -e
89+
90+# Grab any OPENSTACK_SERVICE* environment variables
91+openstack_service_names=`env| awk -F '=' '(/OPENSTACK_SERVICE/){print $2}'`
92+for service_name in $openstack_service_names
93+do
94+ service $service_name status | grep -q running
95+done
96
97=== added symlink 'hooks/cluster-relation-changed'
98=== target is u'keystone-hooks'
99=== added symlink 'hooks/cluster-relation-departed'
100=== target is u'keystone-hooks'
101=== added symlink 'hooks/ha-relation-changed'
102=== target is u'keystone-hooks'
103=== added symlink 'hooks/ha-relation-joined'
104=== target is u'keystone-hooks'
105=== modified file 'hooks/keystone-hooks'
106--- hooks/keystone-hooks 2012-12-12 03:52:01 +0000
107+++ hooks/keystone-hooks 2013-02-22 19:26:19 +0000
108@@ -8,7 +8,7 @@
109
110 config = config_get()
111
112-packages = "keystone python-mysqldb pwgen"
113+packages = "keystone python-mysqldb pwgen haproxy python-jinja2"
114 service = "keystone"
115
116 # used to verify joined services are valid openstack components.
117@@ -46,6 +46,14 @@
118 "quantum": {
119 "type": "network",
120 "desc": "Quantum Networking Service"
121+ },
122+ "oxygen": {
123+ "type": "oxygen",
124+ "desc": "Oxygen Cloud Image Service"
125+ },
126+ "ceilometer": {
127+ "type": "metering",
128+ "desc": "Ceilometer Metering Service"
129 }
130 }
131
132@@ -69,12 +77,14 @@
133 driver='keystone.token.backends.sql.Token')
134 update_config_block('ec2',
135 driver='keystone.contrib.ec2.backends.sql.Ec2')
136+
137 execute("service keystone stop", echo=True)
138 execute("keystone-manage db_sync")
139 execute("service keystone start", echo=True)
140 time.sleep(5)
141 ensure_initial_admin(config)
142
143+
144 def db_joined():
145 relation_data = { "database": config["database"],
146 "username": config["database-user"],
147@@ -84,15 +94,22 @@
148 def db_changed():
149 relation_data = relation_get_dict()
150 if ('password' not in relation_data or
151- 'private-address' not in relation_data):
152- juju_log("private-address or password not set. Peer not ready, exit 0")
153+ 'db_host' not in relation_data):
154+ juju_log("db_host or password not set. Peer not ready, exit 0")
155 exit(0)
156 update_config_block('sql', connection="mysql://%s:%s@%s/%s" %
157 (config["database-user"],
158 relation_data["password"],
159- relation_data["private-address"],
160+ relation_data["db_host"],
161 config["database"]))
162+
163 execute("service keystone stop", echo=True)
164+
165+ if not eligible_leader():
166+ juju_log('Deferring DB initialization to service leader.')
167+ execute("service keystone start")
168+ return
169+
170 execute("keystone-manage db_sync", echo=True)
171 execute("service keystone start")
172 time.sleep(5)
173@@ -124,18 +141,30 @@
174 realtion_set({ "admin_token": -1 })
175 return
176
177- def add_endpoint(region, service, public_url, admin_url, internal_url):
178+ def add_endpoint(region, service, publicurl, adminurl, internalurl):
179 desc = valid_services[service]["desc"]
180 service_type = valid_services[service]["type"]
181 create_service_entry(service, service_type, desc)
182 create_endpoint_template(region=region, service=service,
183- public_url=public_url,
184- admin_url=admin_url,
185- internal_url=internal_url)
186+ publicurl=publicurl,
187+ adminurl=adminurl,
188+ internalurl=internalurl)
189+
190+ if not eligible_leader():
191+ juju_log('Deferring identity_changed() to service leader.')
192+ return
193
194 settings = relation_get_dict(relation_id=relation_id,
195 remote_unit=remote_unit)
196
197+ # Allow the remote service to request creation of any additional roles.
198+ # Currently used by Swift.
199+ if 'requested_roles' in settings and settings['requested_roles'] != 'None':
200+ roles = settings['requested_roles'].split(',')
201+ juju_log("Creating requested roles: %s" % roles)
202+ for role in roles:
203+ create_role(role, user=config['admin-user'], tenant='admin')
204+
205 # the minimum settings needed per endpoint
206 single = set(['service', 'region', 'public_url', 'admin_url',
207 'internal_url'])
208@@ -145,13 +174,27 @@
209 if 'None' in [v for k,v in settings.iteritems()]:
210 # Some backend services advertise no endpoint but require a
211 # hook execution to update auth strategy.
212+ relation_data = {}
213+ # Check if clustered and use vip + haproxy ports if so
214+ if is_clustered():
215+ relation_data["auth_host"] = config['vip']
216+ relation_data["auth_port"] = SERVICE_PORTS['keystone_admin']
217+ relation_data["service_host"] = config['vip']
218+ relation_data["service_port"] = SERVICE_PORTS['keystone_service']
219+ else:
220+ relation_data["auth_host"] = config['hostname']
221+ relation_data["auth_port"] = config['auth-port']
222+ relation_data["service_host"] = config['hostname']
223+ relation_data["service_port"] = config['service-port']
224+ relation_set(relation_data)
225 return
226
227+
228 ensure_valid_service(settings['service'])
229 add_endpoint(region=settings['region'], service=settings['service'],
230- public_url=settings['public_url'],
231- admin_url=settings['admin_url'],
232- internal_url=settings['internal_url'])
233+ publicurl=settings['public_url'],
234+ adminurl=settings['admin_url'],
235+ internalurl=settings['internal_url'])
236 service_username = settings['service']
237 else:
238 # assemble multiple endpoints from relation data. service name
239@@ -186,9 +229,9 @@
240 ep = endpoints[ep]
241 ensure_valid_service(ep['service'])
242 add_endpoint(region=ep['region'], service=ep['service'],
243- public_url=ep['public_url'],
244- admin_url=ep['admin_url'],
245- internal_url=ep['internal_url'])
246+ publicurl=ep['public_url'],
247+ adminurl=ep['admin_url'],
248+ internalurl=ep['internal_url'])
249 services.append(ep['service'])
250 service_username = '_'.join(services)
251
252@@ -201,26 +244,10 @@
253 token = get_admin_token()
254 juju_log("Creating service credentials for '%s'" % service_username)
255
256- stored_passwd = '/var/lib/keystone/%s.passwd' % service_username
257- if os.path.isfile(stored_passwd):
258- juju_log("Loading stored service passwd from %s" % stored_passwd)
259- service_password = open(stored_passwd, 'r').readline().strip('\n')
260- else:
261- juju_log("Generating a new service password for %s" % service_username)
262- service_password = execute('pwgen -c 32 1', die=True)[0].strip()
263- open(stored_passwd, 'w+').writelines("%s\n" % service_password)
264-
265+ service_password = get_service_password(service_username)
266 create_user(service_username, service_password, config['service-tenant'])
267 grant_role(service_username, config['admin-role'], config['service-tenant'])
268
269- # Allow the remote service to request creation of any additional roles.
270- # Currently used by Swift.
271- if 'requested_roles' in settings:
272- roles = settings['requested_roles'].split(',')
273- juju_log("Creating requested roles: %s" % roles)
274- for role in roles:
275- create_role(role, user=config['admin-user'], tenant='admin')
276-
277 # As of https://review.openstack.org/#change,4675, all nodes hosting
278 # an endpoint(s) needs a service username and password assigned to
279 # the service tenant and granted admin role.
280@@ -237,7 +264,15 @@
281 "service_password": service_password,
282 "service_tenant": config['service-tenant']
283 }
284+ # Check if clustered and use vip + haproxy ports if so
285+ if is_clustered():
286+ relation_data["auth_host"] = config['vip']
287+ relation_data["auth_port"] = SERVICE_PORTS['keystone_admin']
288+ relation_data["service_host"] = config['vip']
289+ relation_data["service_port"] = SERVICE_PORTS['keystone_service']
290+
291 relation_set(relation_data)
292+ synchronize_service_credentials()
293
294 def config_changed():
295
296@@ -246,11 +281,117 @@
297 available = get_os_codename_install_source(config['openstack-origin'])
298 installed = get_os_codename_package('keystone')
299
300- if get_os_version_codename(available) > get_os_version_codename(installed):
301+ if (available and
302+ get_os_version_codename(available) > get_os_version_codename(installed)):
303 do_openstack_upgrade(config['openstack-origin'], packages)
304
305+ env_vars = {'OPENSTACK_SERVICE_KEYSTONE': 'keystone',
306+ 'OPENSTACK_PORT_ADMIN': config['admin-port'],
307+ 'OPENSTACK_PORT_PUBLIC': config['service-port']}
308+ save_script_rc(**env_vars)
309+
310 set_admin_token(config['admin-token'])
311- ensure_initial_admin(config)
312+
313+ if eligible_leader():
314+ juju_log('Cluster leader - ensuring endpoint configuration is up to date')
315+ ensure_initial_admin(config)
316+
317+ update_config_block('logger_root', level=config['log-level'],
318+ file='/etc/keystone/logging.conf')
319+ if get_os_version_package('keystone') >= '2013.1':
320+ # PKI introduced in Grizzly
321+ configure_pki_tokens(config)
322+
323+ execute("service keystone restart", echo=True)
324+ cluster_changed()
325+
326+
327+def upgrade_charm():
328+ cluster_changed()
329+ if eligible_leader():
330+ juju_log('Cluster leader - ensuring endpoint configuration is up to date')
331+ ensure_initial_admin(config)
332+
333+
334+SERVICE_PORTS = {
335+ "keystone_admin": int(config['admin-port']) + 1,
336+ "keystone_service": int(config['service-port']) + 1
337+ }
338+
339+
340+def cluster_changed():
341+ cluster_hosts = {}
342+ cluster_hosts['self'] = config['hostname']
343+ for r_id in relation_ids('cluster'):
344+ for unit in relation_list(r_id):
345+ cluster_hosts[unit.replace('/','-')] = \
346+ relation_get_dict(relation_id=r_id,
347+ remote_unit=unit)['private-address']
348+ configure_haproxy(cluster_hosts,
349+ SERVICE_PORTS)
350+
351+ synchronize_service_credentials()
352+
353+
354+def ha_relation_changed():
355+ relation_data = relation_get_dict()
356+ if ('clustered' in relation_data and
357+ is_leader()):
358+ juju_log('Cluster configured, notifying other services and updating'
359+ 'keystone endpoint configuration')
360+ # Update keystone endpoint to point at VIP
361+ ensure_initial_admin(config)
362+ # Tell all related services to start using
363+ # the VIP and haproxy ports instead
364+ for r_id in relation_ids('identity-service'):
365+ relation_set_2(rid=r_id,
366+ auth_host=config['vip'],
367+ service_host=config['vip'],
368+ service_port=SERVICE_PORTS['keystone_service'],
369+ auth_port=SERVICE_PORTS['keystone_admin'])
370+
371+
372+def ha_relation_joined():
373+ # Obtain the config values necessary for the cluster config. These
374+ # include multicast port and interface to bind to.
375+ corosync_bindiface = config['ha-bindiface']
376+ corosync_mcastport = config['ha-mcastport']
377+
378+ # Obtain resources
379+ resources = {
380+ 'res_ks_vip':'ocf:heartbeat:IPaddr2',
381+ 'res_ks_haproxy':'lsb:haproxy'
382+ }
383+ # TODO: Obtain netmask and nic where to place VIP.
384+ resource_params = {
385+ 'res_ks_vip':'params ip="%s" cidr_netmask="%s" nic="%s"' % (config['vip'],
386+ config['vip_cidr'], config['vip_iface']),
387+ 'res_ks_haproxy':'op monitor interval="5s"'
388+ }
389+ init_services = {
390+ 'res_ks_haproxy':'haproxy'
391+ }
392+ groups = {
393+ 'grp_ks_haproxy':'res_ks_vip res_ks_haproxy'
394+ }
395+ #clones = {
396+ # 'cln_ks_haproxy':'res_ks_haproxy meta globally-unique="false" interleave="true"'
397+ # }
398+
399+ #orders = {
400+ # 'ord_vip_before_haproxy':'inf: res_ks_vip res_ks_haproxy'
401+ # }
402+ #colocations = {
403+ # 'col_vip_on_haproxy':'inf: res_ks_haproxy res_ks_vip'
404+ # }
405+
406+ relation_set_2(init_services=init_services,
407+ corosync_bindiface=corosync_bindiface,
408+ corosync_mcastport=corosync_mcastport,
409+ resources=resources,
410+ resource_params=resource_params,
411+ groups=groups)
412+
413
414 hooks = {
415 "install": install_hook,
416@@ -258,12 +399,17 @@
417 "shared-db-relation-changed": db_changed,
418 "identity-service-relation-joined": identity_joined,
419 "identity-service-relation-changed": identity_changed,
420- "config-changed": config_changed
421+ "config-changed": config_changed,
422+ "cluster-relation-changed": cluster_changed,
423+ "cluster-relation-departed": cluster_changed,
424+ "ha-relation-joined": ha_relation_joined,
425+ "ha-relation-changed": ha_relation_changed,
426+ "upgrade-charm": upgrade_charm
427 }
428
429 # keystone-hooks gets called by symlink corresponding to the requested relation
430 # hook.
431-arg0 = sys.argv[0].split("/").pop()
432-if arg0 not in hooks.keys():
433- error_out("Unsupported hook: %s" % arg0)
434-hooks[arg0]()
435+hook = os.path.basename(sys.argv[0])
436+if hook not in hooks.keys():
437+ error_out("Unsupported hook: %s" % hook)
438+hooks[hook]()
439
440=== modified file 'hooks/lib/openstack_common.py'
441--- hooks/lib/openstack_common.py 2012-12-05 20:35:05 +0000
442+++ hooks/lib/openstack_common.py 2013-02-22 19:26:19 +0000
443@@ -3,6 +3,7 @@
444 # Common python helper functions used for OpenStack charms.
445
446 import subprocess
447+import os
448
449 CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
450 CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
451@@ -22,6 +23,12 @@
452 '2013.1': 'grizzly'
453 }
454
455+# The ugly duckling
456+swift_codenames = {
457+ '1.4.3': 'diablo',
458+ '1.4.8': 'essex',
459+ '1.7.4': 'folsom'
460+}
461
462 def juju_log(msg):
463 subprocess.check_call(['juju-log', msg])
464@@ -118,12 +125,32 @@
465
466 vers = vers[:6]
467 try:
468- return openstack_codenames[vers]
469+ if 'swift' in pkg:
470+ vers = vers[:5]
471+ return swift_codenames[vers]
472+ else:
473+ vers = vers[:6]
474+ return openstack_codenames[vers]
475 except KeyError:
476 e = 'Could not determine OpenStack codename for version %s' % vers
477 error_out(e)
478
479
480+def get_os_version_package(pkg):
481+ '''Derive OpenStack version number from an installed package.'''
482+ codename = get_os_codename_package(pkg)
483+
484+ if 'swift' in pkg:
485+ vers_map = swift_codenames
486+ else:
487+ vers_map = openstack_codenames
488+
489+ for version, cname in vers_map.iteritems():
490+ if cname == codename:
491+ return version
492+ e = "Could not determine OpenStack version for package: %s" % pkg
493+ error_out(e)
494+
495 def configure_installation_source(rel):
496 '''Configure apt installation source.'''
497
498@@ -164,9 +191,11 @@
499 'version (%s)' % (ca_rel, ubuntu_rel)
500 error_out(e)
501
502- if ca_rel == 'folsom/staging':
503+ if 'staging' in ca_rel:
504 # staging is just a regular PPA.
505- cmd = 'add-apt-repository -y ppa:ubuntu-cloud-archive/folsom-staging'
506+ os_rel = ca_rel.split('/')[0]
507+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
508+ cmd = 'add-apt-repository -y %s' % ppa
509 subprocess.check_call(cmd.split(' '))
510 return
511
512@@ -174,7 +203,10 @@
513 pockets = {
514 'folsom': 'precise-updates/folsom',
515 'folsom/updates': 'precise-updates/folsom',
516- 'folsom/proposed': 'precise-proposed/folsom'
517+ 'folsom/proposed': 'precise-proposed/folsom',
518+ 'grizzly': 'precise-updates/grizzly',
519+ 'grizzly/updates': 'precise-updates/grizzly',
520+ 'grizzly/proposed': 'precise-proposed/grizzly'
521 }
522
523 try:
524@@ -191,3 +223,39 @@
525 else:
526 error_out("Invalid openstack-release specified: %s" % rel)
527
528+HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
529+HAPROXY_DEFAULT = '/etc/default/haproxy'
530+
531+def configure_haproxy(units, service_ports, template_dir=None):
532+ template_dir = template_dir or 'templates'
533+ import jinja2
534+ context = {
535+ 'units': units,
536+ 'service_ports': service_ports
537+ }
538+ templates = jinja2.Environment(
539+ loader=jinja2.FileSystemLoader(template_dir)
540+ )
541+ template = templates.get_template(
542+ os.path.basename(HAPROXY_CONF)
543+ )
544+ with open(HAPROXY_CONF, 'w') as f:
545+ f.write(template.render(context))
546+ with open(HAPROXY_DEFAULT, 'w') as f:
547+ f.write('ENABLED=1')
548+
549+def save_script_rc(script_path="scriptrc", **env_vars):
550+ """
551+ Write an rc file in the charm-delivered directory containing
552+ exported environment variables provided by env_vars. Any charm scripts run
553+ outside the juju hook environment can source this scriptrc to obtain
554+ updated config information necessary to perform health checks or
555+ service changes.
556+ """
557+ unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-')
558+ juju_rc_path="/var/lib/juju/units/%s/charm/%s" % (unit_name, script_path)
559+ with open(juju_rc_path, 'wb') as rc_script:
560+ rc_script.write(
561+ "#!/bin/bash\n")
562+ [rc_script.write('export %s=%s\n' % (u, p))
563+ for u, p in env_vars.iteritems() if u != "script_path"]
564
565=== added symlink 'hooks/upgrade-charm'
566=== target is u'keystone-hooks'
567=== modified file 'hooks/utils.py'
568--- hooks/utils.py 2012-12-12 03:52:41 +0000
569+++ hooks/utils.py 2013-02-22 19:26:19 +0000
570@@ -1,4 +1,5 @@
571 #!/usr/bin/python
572+import ConfigParser
573 import subprocess
574 import sys
575 import json
576@@ -10,9 +11,10 @@
577 keystone_conf = "/etc/keystone/keystone.conf"
578 stored_passwd = "/var/lib/keystone/keystone.passwd"
579 stored_token = "/var/lib/keystone/keystone.token"
580+SERVICE_PASSWD_PATH = '/var/lib/keystone/services.passwd'
581
582 def execute(cmd, die=False, echo=False):
583- """ Executes a command
584+ """ Executes a command
585
586 if die=True, script will exit(1) if command does not return 0
587 if echo=True, output of command will be printed to stdout
588@@ -79,6 +81,20 @@
589 for k in relation_data:
590 execute("relation-set %s=%s" % (k, relation_data[k]), die=True)
591
592+def relation_set_2(**kwargs):
593+ cmd = [
594+ 'relation-set'
595+ ]
596+ args = []
597+ for k, v in kwargs.items():
598+ if k == 'rid':
599+ cmd.append('-r')
600+ cmd.append(v)
601+ else:
602+ args.append('{}={}'.format(k, v))
603+ cmd += args
604+ subprocess.check_call(cmd)
605+
606 def relation_get(relation_data):
607 """ Obtain all current relation data
608 relation_data is a list of options to query from the relation
609@@ -149,106 +165,26 @@
610 keystone_conf)
611 error_out('Could not find admin_token line in %s' % keystone_conf)
612
613-def update_config_block(block, **kwargs):
614+def update_config_block(section, **kwargs):
615 """ Updates keystone.conf blocks given kwargs.
616- Can be used to update driver settings for a particular backend,
617- setting the sql connection, etc.
618-
619- Parses block heading as '[block]'
620-
621- If block does not exist, a new block will be created at end of file with
622- given kwargs
623- """
624- f = open(keystone_conf, "r+")
625- orig = f.readlines()
626- new = []
627- found_block = ""
628- heading = "[%s]\n" % block
629-
630- lines = len(orig)
631- ln = 0
632-
633- def update_block(block):
634- for k, v in kwargs.iteritems():
635- for l in block:
636- if l.strip().split(" ")[0] == k:
637- block[block.index(l)] = "%s = %s\n" % (k, v)
638- return
639- block.append('%s = %s\n' % (k, v))
640- block.append('\n')
641-
642- try:
643- found = False
644- while ln < lines:
645- if orig[ln] != heading:
646- new.append(orig[ln])
647- ln += 1
648- else:
649- new.append(orig[ln])
650- ln += 1
651- block = []
652- while orig[ln].strip() != '':
653- block.append(orig[ln])
654- ln += 1
655- update_block(block)
656- new += block
657- found = True
658-
659- if not found:
660- if new[(len(new) - 1)].strip() != '':
661- new.append('\n')
662- new.append('%s' % heading)
663- for k, v in kwargs.iteritems():
664- new.append('%s = %s\n' % (k, v))
665- new.append('\n')
666- except:
667- error_out('Error while attempting to update config block. '\
668- 'Refusing to overwite existing config.')
669-
670- return
671-
672- # backup original config
673- backup = open(keystone_conf + '.juju-back', 'w+')
674- for l in orig:
675- backup.write(l)
676- backup.close()
677-
678- # update config
679- f.seek(0)
680- f.truncate()
681- for l in new:
682- f.write(l)
683-
684-
685-def keystone_conf_update(opt, val):
686- """ Updates keystone.conf values
687- If option exists, it is reset to new value
688- If it does not, it added to the top of the config file after the [DEFAULT]
689- heading to keep it out of the paste deploy config
690- """
691- f = open(keystone_conf, "r+")
692- orig = f.readlines()
693- new = ""
694- found = False
695- for l in orig:
696- if l.split(' ')[0] == opt:
697- juju_log("Updating %s, setting %s = %s" % (keystone_conf, opt, val))
698- new += "%s = %s\n" % (opt, val)
699- found = True
700- else:
701- new += l
702- new = new.split('\n')
703- # insert a new value at the top of the file, after the 'DEFAULT' header so
704- # as not to muck up paste deploy configuration later in the file
705- if not found:
706- juju_log("Adding new config option %s = %s" % (opt, val))
707- header = new.index("[DEFAULT]")
708- new.insert((header+1), "%s = %s" % (opt, val))
709- f.seek(0)
710- f.truncate()
711- for l in new:
712- f.write("%s\n" % l)
713- f.close
714+ Update a config setting in a specific setting of a config
715+ file (/etc/keystone/keystone.conf, by default)
716+ """
717+ if 'file' in kwargs:
718+ conf_file = kwargs['file']
719+ del kwargs['file']
720+ else:
721+ conf_file = keystone_conf
722+ config = ConfigParser.RawConfigParser()
723+ config.read(conf_file)
724+
725+ if section != 'DEFAULT' and not config.has_section(section):
726+ config.add_section(section)
727+
728+ for k, v in kwargs.iteritems():
729+ config.set(section, k, v)
730+ with open(conf_file, 'wb') as out:
731+ config.write(out)
732
733 def create_service_entry(service_name, service_type, service_desc, owner=None):
734 """ Add a new service entry to keystone if one does not already exist """
735@@ -264,8 +200,8 @@
736 description=service_desc)
737 juju_log("Created new service entry '%s'" % service_name)
738
739-def create_endpoint_template(region, service, public_url, admin_url,
740- internal_url):
741+def create_endpoint_template(region, service, publicurl, adminurl,
742+ internalurl):
743 """ Create a new endpoint template for service if one does not already
744 exist matching name *and* region """
745 import manager
746@@ -276,13 +212,24 @@
747 if ep['service_id'] == service_id and ep['region'] == region:
748 juju_log("Endpoint template already exists for '%s' in '%s'"
749 % (service, region))
750- return
751+
752+ up_to_date = True
753+ for k in ['publicurl', 'adminurl', 'internalurl']:
754+ if ep[k] != locals()[k]:
755+ up_to_date = False
756+
757+ if up_to_date:
758+ return
759+ else:
760+ # delete endpoint and recreate if endpoint urls need updating.
761+ juju_log("Updating endpoint template with new endpoint urls.")
762+ manager.api.endpoints.delete(ep['id'])
763
764 manager.api.endpoints.create(region=region,
765 service_id=service_id,
766- publicurl=public_url,
767- adminurl=admin_url,
768- internalurl=internal_url)
769+ publicurl=publicurl,
770+ adminurl=adminurl,
771+ internalurl=internalurl)
772 juju_log("Created new endpoint template for '%s' in '%s'" %
773 (region, service))
774
775@@ -411,12 +358,33 @@
776 create_service_entry("keystone", "identity", "Keystone Identity Service")
777 # following documentation here, perhaps we should be using juju
778 # public/private addresses for public/internal urls.
779- public_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"])
780- admin_url = "http://%s:%s/v2.0" % (config["hostname"], config["admin-port"])
781- internal_url = "http://%s:%s/v2.0" % (config["hostname"], config["service-port"])
782- create_endpoint_template("RegionOne", "keystone", public_url,
783+ if is_clustered():
784+ juju_log("Creating endpoint for clustered configuration")
785+ for region in config['region'].split():
786+ create_keystone_endpoint(service_host=config["vip"],
787+ service_port=int(config["service-port"]) + 1,
788+ auth_host=config["vip"],
789+ auth_port=int(config["admin-port"]) + 1,
790+ region=region)
791+ else:
792+ juju_log("Creating standard endpoint")
793+ for region in config['region'].split():
794+ create_keystone_endpoint(service_host=config["hostname"],
795+ service_port=config["service-port"],
796+ auth_host=config["hostname"],
797+ auth_port=config["admin-port"],
798+ region=region)
799+
800+
801+def create_keystone_endpoint(service_host, service_port,
802+ auth_host, auth_port, region):
803+ public_url = "http://%s:%s/v2.0" % (service_host, service_port)
804+ admin_url = "http://%s:%s/v2.0" % (auth_host, auth_port)
805+ internal_url = "http://%s:%s/v2.0" % (service_host, service_port)
806+ create_endpoint_template(region, "keystone", public_url,
807 admin_url, internal_url)
808
809+
810 def update_user_password(username, password):
811 import manager
812 manager = manager.KeystoneManager(endpoint='http://localhost:35357/v2.0/',
813@@ -430,6 +398,40 @@
814 manager.api.users.update_password(user=user_id, password=password)
815 juju_log("Successfully updated password for user '%s'" % username)
816
817+def load_stored_passwords(path=SERVICE_PASSWD_PATH):
818+ creds = {}
819+ if not os.path.isfile(path):
820+ return creds
821+
822+ stored_passwd = open(path, 'r')
823+ for l in stored_passwd.readlines():
824+ user, passwd = l.strip().split(':')
825+ creds[user] = passwd
826+ return creds
827+
828+def save_stored_passwords(path=SERVICE_PASSWD_PATH, **creds):
829+ with open(path, 'wb') as stored_passwd:
830+ [stored_passwd.write('%s:%s\n' % (u, p)) for u, p in creds.iteritems()]
831+
832+def get_service_password(service_username):
833+ creds = load_stored_passwords()
834+ if service_username in creds:
835+ return creds[service_username]
836+
837+ passwd = subprocess.check_output(['pwgen', '-c', '32', '1']).strip()
838+ creds[service_username] = passwd
839+ save_stored_passwords(**creds)
840+
841+ return passwd
842+
843+def configure_pki_tokens(config):
844+ '''Configure PKI token signing, if enabled.'''
845+ if config['enable-pki'] not in ['True', 'true']:
846+ update_config_block('signing', token_format='UUID')
847+ else:
848+ juju_log('TODO: PKI Support, setting to UUID for now.')
849+ update_config_block('signing', token_format='UUID')
850+
851
852 def do_openstack_upgrade(install_src, packages):
853 '''Upgrade packages from a given install src.'''
854@@ -474,9 +476,84 @@
855 relation_data["private-address"],
856 config["database"]))
857
858- juju_log('Running database migrations for %s' % new_vers)
859 execute('service keystone stop', echo=True)
860- execute('keystone-manage db_sync', echo=True, die=True)
861+ if ((is_clustered() and is_leader()) or
862+ not is_clustered()):
863+ juju_log('Running database migrations for %s' % new_vers)
864+ execute('keystone-manage db_sync', echo=True, die=True)
865+ else:
866+ juju_log('Not cluster leader; snoozing whilst leader upgrades DB')
867+ time.sleep(10)
868 execute('service keystone start', echo=True)
869 time.sleep(5)
870 juju_log('Completed Keystone upgrade: %s -> %s' % (old_vers, new_vers))
871+
872+
873+def is_clustered():
874+ for r_id in (relation_ids('ha') or []):
875+ for unit in (relation_list(r_id) or []):
876+ relation_data = \
877+ relation_get_dict(relation_id=r_id,
878+ remote_unit=unit)
879+ if 'clustered' in relation_data:
880+ return True
881+ return False
882+
883+
884+def is_leader():
885+ status = execute('crm resource show res_ks_vip', echo=True)[0].strip()
886+ hostname = execute('hostname', echo=True)[0].strip()
887+ if hostname in status:
888+ return True
889+ else:
890+ return False
891+
892+
893+def peer_units():
894+ peers = []
895+ for r_id in (relation_ids('cluster') or []):
896+ for unit in (relation_list(r_id) or []):
897+ peers.append(unit)
898+ return peers
899+
900+def oldest_peer(peers):
901+ local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1]
902+ for peer in peers:
903+ remote_unit_no = peer.split('/')[1]
904+ if remote_unit_no < local_unit_no:
905+ return False
906+ return True
907+
908+
909+def eligible_leader():
910+ if is_clustered():
911+ if not is_leader():
912+ juju_log('Deferring action to CRM leader.')
913+ return False
914+ else:
915+ peers = peer_units()
916+ if peers and not oldest_peer(peers):
917+ juju_log('Deferring action to oldest service unit.')
918+ return False
919+ return True
920+
921+
922+def synchronize_service_credentials():
923+ '''
924+ Broadcast service credentials to peers or consume those that have been
925+ broadcasted by peer, depending on hook context.
926+ '''
927+ if os.path.basename(sys.argv[0]) == 'cluster-relation-changed':
928+ r_data = relation_get_dict()
929+ if 'service_credentials' in r_data:
930+ juju_log('Saving service passwords from peer.')
931+ save_stored_passwords(**json.loads(r_data['service_credentials']))
932+ return
933+
934+ creds = load_stored_passwords()
935+ if not creds:
936+ return
937+ juju_log('Synchronizing service passwords to all peers.')
938+ creds = json.dumps(creds)
939+ for r_id in (relation_ids('cluster') or []):
940+ relation_set_2(rid=r_id, service_credentials=creds)
941
942=== modified file 'metadata.yaml'
943--- metadata.yaml 2012-06-07 17:43:42 +0000
944+++ metadata.yaml 2013-02-22 19:26:19 +0000
945@@ -11,3 +11,9 @@
946 requires:
947 shared-db:
948 interface: mysql-shared
949+ ha:
950+ interface: hacluster
951+ scope: container
952+peers:
953+ cluster:
954+ interface: keystone-ha
955
956=== added file 'remove_from_cluster'
957--- remove_from_cluster 1970-01-01 00:00:00 +0000
958+++ remove_from_cluster 2013-02-22 19:26:19 +0000
959@@ -0,0 +1,2 @@
960+#!/bin/bash
961+crm node standby
962
963=== modified file 'revision'
964--- revision 2012-12-12 03:52:01 +0000
965+++ revision 2013-02-22 19:26:19 +0000
966@@ -1,1 +1,1 @@
967-165
968+197
969
970=== added directory 'templates'
971=== added file 'templates/haproxy.cfg'
972--- templates/haproxy.cfg 1970-01-01 00:00:00 +0000
973+++ templates/haproxy.cfg 2013-02-22 19:26:19 +0000
974@@ -0,0 +1,35 @@
975+global
976+ log 127.0.0.1 local0
977+ log 127.0.0.1 local1 notice
978+ maxconn 4096
979+ user haproxy
980+ group haproxy
981+ spread-checks 0
982+
983+defaults
984+ log global
985+ mode http
986+ option httplog
987+ option dontlognull
988+ retries 3
989+ timeout queue 1000
990+ timeout connect 1000
991+ timeout client 10000
992+ timeout server 10000
993+
994+listen stats :8888
995+ mode http
996+ stats enable
997+ stats hide-version
998+ stats realm Haproxy\ Statistics
999+ stats uri /
1000+ stats auth admin:password
1001+
1002+{% for service, port in service_ports.iteritems() -%}
1003+listen {{ service }} 0.0.0.0:{{ port }}
1004+ balance roundrobin
1005+ option tcplog
1006+ {% for unit, address in units.iteritems() -%}
1007+ server {{ unit }} {{ address }}:{{ port - 1 }} check
1008+ {% endfor %}
1009+{% endfor %}
